Regression Alert: Revisiting Preseason Expectations

At what point should we stop caring about what we thought about players before the season began?

In October of 2013, I wondered just how many weeks it took before early-season performance wasn't a fluke anymore. In "Revisiting Preseason Expectations", I looked back at the 2012 season and compared how well production in a player's first four games predicted production in his last 12 games. And since that number was meaningless without context, I compared it to how well his preseason ADP predicted production in his last 12 games.

It was a fortuitous time to ask that question, as it turns out, because I discovered that after four weeks in 2012, preseason ADP still predicted performance going forward better than early-season production did.

This is the kind of surprising result that I love, but sometimes the reason a result is surprising is simply that it's a fluke. So in October of 2014, I revisited "Revisiting Preseason Expectations". This time I found that in the 2013 season, preseason ADP and week 1-4 performance held essentially identical predictive power for the rest of the season.

With two different results in two years, I decided to keep up my quest for a definitive answer about whether early-season results or preseason expectations were more predictive down the stretch. In October of 2015, I revisited my revisitation of "Revisiting Preseason Expectations". This time, I found that early-season performance held a slight predictive edge over preseason ADP.

With things still so inconclusive, in October of 2016, I decided to revisit my revisitation of the revisited "Revisiting Preseason Expectations". As in 2015, I found that early-season performance carried slightly more predictive power than preseason ADP.

And now, as you've probably guessed, it's time for an autumn tradition as sacred as turning off the lights and pretending I'm not home on October 31st. It's time for "Revisiting Preseason Expectations"! (Or, I guess technically for Revisiting Revisiting Revisiting Revisiting Revisiting Preseason Expectations.)


If you've read the previous pieces, you have a rough idea of how this works, but here's a quick rundown of the methodology. I have compiled a list of the top 24 quarterbacks, 36 running backs, 48 wide receivers, and 24 tight ends according to MFL’s 2016 preseason ADP.

From that list, I have removed any player who missed more than one of his team’s first four games or more than two of his team’s last twelve games so that any fluctuations represent performance and not injury. As always, we’re looking by team games rather than by week, so players with an early bye aren't skewing the comparisons.

I’ve used PPR scoring for this exercise, because that was easier for me to look up with the databases I had on hand. For the remaining players, I tracked where they ranked at their position over the first four games and over the final twelve games. Finally, I’ve calculated the correlation between preseason ADP and stretch performance, as well as the correlation between early performance and stretch performance.
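The correlation step can be sketched in a few lines of code. This is a minimal illustration of the mechanics, not the article's actual calculation: pearson() is a plain textbook implementation, and the sample rows are just the first four quarterbacks from the data below, whereas the reported figures use the full player pool at each position.

```python
# For each surviving player we keep three numbers -- preseason ADP rank,
# positional rank over games 1-4, and positional rank over games 5-16 --
# then correlate each early signal with the late-season ranks.

def pearson(xs, ys):
    """Plain Pearson correlation; applied to ranks, this is Spearman's rho."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rows = [
    # (player, ADP rank, rank games 1-4, rank games 5-16)
    ("Cam Newton", 1, 6, 21),
    ("Aaron Rodgers", 2, 3, 1),
    ("Russell Wilson", 3, 20, 9),
    ("Andrew Luck", 4, 7, 6),
]

adp = [r[1] for r in rows]
early = [r[2] for r in rows]
late = [r[3] for r in rows]

print("ADP vs. late: ", round(pearson(adp, late), 3))
print("Early vs. late:", round(pearson(early, late), 3))
```

On a four-player slice like this the numbers are noise; the point is only the shape of the computation.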

Here's the data.


Quarterbacks
(Player, Preseason ADP, Rank Games 1-4, Rank Games 5-16)

Cam Newton 1 6 21
Aaron Rodgers 2 3 1
Russell Wilson 3 20 9
Andrew Luck 4 7 6
Ben Roethlisberger 5 5 23
Drew Brees 6 2 3
Blake Bortles 8 9 13
Carson Palmer 9 21 18
Eli Manning 10 23 22
Derek Carr 11 4 17
Philip Rivers 11 12 15
Jameis Winston 12 13 16
Kirk Cousins 13 14 5
Matthew Stafford 14 8 11
Matt Ryan 15 1 4
Tyrod Taylor 16 16 10
Marcus Mariota 17 26 8
Andy Dalton 18 19 14
Dak Prescott 19 10 7
Alex Smith 20 17 24
Brock Osweiler 21 25 28

The correlation between ADP and late-season performance was 0.200.
The correlation between early-season performance and late-season performance was 0.404.


Running Backs
(Player, Preseason ADP, Rank Games 1-4, Rank Games 5-16)

Todd Gurley 1 24 15
David Johnson 2 2 1
Ezekiel Elliott 3 9 3
Lamar Miller 6 18 18
Devonta Freeman 7 13 4
Mark Ingram 10 12 9
LeSean McCoy 11 4 5
Latavius Murray 15 20 13
Matt Forte 16 10 30
DeMarco Murray 17 1 6
Jeremy Hill 19 25 24
Melvin Gordon 22 3 11
Duke Johnson 26 30 36
Frank Gore 30 14 14
Derrick Henry 32 56 44
LeGarrette Blount 34 15 10
T.J. Yeldon 35 22 42
Spencer Ware 36 17 20

The correlation between ADP and late-season performance was 0.597.
The correlation between early-season performance and late-season performance was 0.768.


Wide Receivers
(Player, Preseason ADP, Rank Games 1-4, Rank Games 5-16)

Antonio Brown 1 2 4
Odell Beckham Jr. 2 29 1
Julio Jones 3 3 15
DeAndre Hopkins 5 31 34
Allen Robinson 6 18 36
Dez Bryant 7 70 27
Jordy Nelson 8 7 2
Mike Evans 9 6 3
Brandon Marshall 11 36 51
Amari Cooper 12 27 13
Brandin Cooks 14 22 9
T.Y. Hilton 15 11 5
Demaryius Thomas 17 14 21
Jarvis Landry 18 8 22
Julian Edelman 19 46 10
Kelvin Benjamin 21 15 39
Doug Baldwin 22 12 11
Golden Tate 23 80 8
Michael Floyd 27 50 74
Larry Fitzgerald 28 10 14
Jordan Matthews 30 21 49
Emmanuel Sanders 31 9 30
Tyler Lockett 33 87 53
Marvin Jones 34 4 65
DeSean Jackson 35 39 37
John Brown 36 47 86
Michael Crabtree 37 5 23
Sterling Shepard 38 19 43
DeVante Parker 39 57 47
Willie Snead 42 30 31
Davante Adams 43 35 7
Tavon Austin 44 52 52
Devin Funchess 46 93 79

The correlation between ADP and late-season performance was 0.551.
The correlation between early-season performance and late-season performance was 0.447.


Tight Ends
(Player, Preseason ADP, Rank Games 1-4, Rank Games 5-16)

Greg Olsen 3 1 6
Travis Kelce 4 4 1
Delanie Walker 5 16 3
Coby Fleener 6 14 16
Gary Barnidge 7 15 20
Jimmy Graham 11 6 5
Martellus Bennett 12 7 10
Jason Witten 14 13 13
Dwayne Allen 17 38 24
Richard Rodgers 19 34 37
Jesse James 20 17 32
Kyle Rudolph 22 3 4
Charles Clay 23 27 12

The correlation between ADP and late-season performance was 0.461.
The correlation between early-season performance and late-season performance was 0.723.


You may not have guessed it by the individual correlations, which tilted rather strongly towards early-season performance at three positions and weakly towards ADP at the fourth, but the correlation between preseason ADP and late-season performance across all positions was 0.599, while the correlation between early-season performance and late-season performance was 0.585. (The fact that there were more WRs in the sample than RBs and TEs combined explains a lot of that tilt.)

After five years of running this article and seven years of collected data, how do things stand? Here are the correlations at each position. (I've only run positional breakdowns for the past three years, hence the shorter charts.)

(Year, ADP Correlation, Early-Performance Correlation)

Quarterbacks
2014 0.422 -0.019
2015 0.260 0.215
2016 0.200 0.404

Running Backs
2014 0.568 0.472
2015 0.309 0.644
2016 0.597 0.768

Wide Receivers
2014 0.333 0.477
2015 0.648 0.632
2016 0.551 0.447

Tight Ends
2014 -0.051 0.416
2015 0.295 0.559
2016 0.461 0.723

All Positions
2010-2012 0.578 0.471
2013 0.649 0.655
2014 0.466 0.560
2015 0.548 0.659
2016 0.599 0.585

I don't see any obvious trends suggesting that preseason ADP is better at one position while early-season performance is better at another. At quarterback, running back, and wide receiver there have been two seasons favoring one and one season favoring the other. It's possible that early-season performance is more predictive than ADP at tight end, where it has now held the edge three years running, but I wouldn't be especially confident in that, because random chance alone will produce three-year runs like that fairly regularly.

Overall, though, this year just reinforces my prior belief that four games' worth of stats gives us no more and no less information about a player than an offseason of study. If one person drafted a new team today straight from preseason ADP, and another drafted straight from current year-to-date rankings, both teams would probably do about equally well.

But the idea that it has to be either preseason ADP or early-season production is a false dichotomy. Most of us are closet Bayesians, which means we start with an opinion and update it with new evidence. In that case, we've reached the point of the season where we should give roughly equal weight to both factors.

Indeed, a simple average of preseason ADP and ranking through four games correlates with rest-of-year outcomes better than either factor alone, producing a robust 0.682 last year. And I've demonstrated in the past that an award-winning projector like Bob Henry can outperform even that average, though no matter what we do, we'll probably never get much above a correlation of 0.700.
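The blended signal can be sketched the same way: average each player's preseason ADP rank with his rank through four games, then correlate the blend with his rest-of-season rank. The rows here are just the first four tight ends from the table above to show the mechanics; the 0.682 figure in the text comes from the full sample, not this toy slice.

```python
# Blend preseason ADP with early-season rank and correlate the result
# with late-season rank. Rows: (player, ADP rank, rank games 1-4,
# rank games 5-16), taken from the tight end table above.

def pearson(xs, ys):
    """Plain Pearson correlation; applied to ranks, this is Spearman's rho."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rows = [
    ("Greg Olsen", 3, 1, 6),
    ("Travis Kelce", 4, 4, 1),
    ("Delanie Walker", 5, 16, 3),
    ("Coby Fleener", 6, 14, 16),
]

adp = [r[1] for r in rows]
early = [r[2] for r in rows]
late = [r[3] for r in rows]
blend = [(a + e) / 2 for a, e in zip(adp, early)]  # equal-weight average

print("Blend vs. late:", round(pearson(blend, late), 3))
```

Equal weighting is the simplest choice consistent with the finding that neither signal dominates at this point in the season; a more careful blend would tune the weights by position and by week.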

(As an aside, in the past I've seen a study similar to this that used points scored from the previous year instead of preseason ADP, and that study discovered that week three is the informational tipping point. This led to the quip that all of the hard work we put in during the offseason is basically to buy us one extra week before being wrong.)


Now, the format of "Regression Alert" is typically that I identify an area for regression, make a specific prediction, and then track that prediction over the ensuing weeks. There's not really a simple way to make a prediction that will cover all four positions and still be easy to track, but this idea that preseason ADP is still as predictively useful as performance to date is a valuable one, and one I don't think many people realize, so I felt it was important to cover.

Just because I'm taking a break from the prediction racket this week doesn't mean I'm not still on the hook for previous predictions. Here's how things stand on those.

Two weeks ago, I identified a group of running backs with high and low yards per carry and predicted the low-ypc group would outrush the high-ypc group going forward. How do things stand?

Through two weeks, Group A averaged 14.4 carries for 81.8 yards. In the two weeks since, Group A has averaged 15.0 carries for 67.6 yards.

Through two weeks, Group B averaged 16.3 carries for 51.3 yards. In the two weeks since, Group B has averaged 16.1 carries for 54.3 yards.

So far, the volume has stayed relatively constant and the high-YPC group has regressed as expected, but the low-YPC group's yards per carry remains stubbornly low. Yards per carry, however, is famously sensitive to outliers and Group B still has two more weeks to close the gap.
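To see just how outlier-sensitive yards per carry is, consider a hypothetical carry log (made-up numbers, not any actual player's carries):

```python
# One breakaway run can transform a back's YPC: twenty ordinary carries
# average 3.3 yards, and a single 75-yard run drags that to about 6.7.
ordinary = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3] * 2      # 20 routine carries
ypc_before = sum(ordinary) / len(ordinary)           # 3.3
with_breakaway = ordinary + [75]                     # add one long run
ypc_after = sum(with_breakaway) / len(with_breakaway)
print(round(ypc_before, 2), round(ypc_after, 2))     # prints: 3.3 6.71
```

One carry out of twenty-one doubles the average, which is why a group's YPC can swing sharply on a single play in either direction over a two-week window.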

Last week, I looked at receivers who had aberrationally high or low yards-per-target averages based on the length of their average reception. Then I predicted the low-YPT group would outperform the high-YPT group going forward.

Through three weeks, Group A averaged 6.8 targets, 5.0 receptions, and 79.8 yards per game. In week 4, Group A averaged 5.9 targets, 3.6 receptions, and 37.6 yards per game.

Through three weeks, Group B averaged 9.6 targets, 5.8 receptions, and 68.6 yards per game. In week 4, Group B averaged 8.4 targets, 5.0 receptions, and 64.1 yards per game.

That... now that's a result. Not only did Group B trounce Group A in targets, receptions, and yards, they actually beat Group A in yards per target (7.6 to 6.4). I'm not getting cocky about this one yet, because there's still a lot of football left to go.

More articles from Adam Harstad

See all

More articles on: ADP

See all

More articles on: Stats

See all

More articles on: Strategy

See all