Methodology
Most Recent Update 2023.2 - Week 8
Metrics such as opponent offensive line quality and pace of play have been replaced with average opponent fantasy points allowed to DTs for the season. This is still a relatively small part of the model (up to 10% of score) but is a more stable metric that captures the underlying construct I was attempting to measure.
While other tiered measures have performed on par with or better than the raw numbers in terms of correlation with performance, the raw run defense grades were far outpacing the two-tier system I had been using. I modified my tiers to add a third level while raising the threshold for top points in this category. The net effect for Week 8 was 25 players seeing a decrease in score and 25 players getting a bump. I will continue to monitor and refine this metric.
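The tiering change above can be sketched as a simple step function over the grade. The specific cutoffs (85/75/65) and point values below are illustrative placeholders, since the model's actual thresholds are not published here:

```python
def run_defense_tier_score(grade, max_points=10.0):
    """Map a PFF run defense grade (0-100) onto a three-tier score.

    The cutoffs (85/75/65) and point values are illustrative
    placeholders, not the model's actual thresholds.
    """
    if grade >= 85.0:   # top tier: threshold raised in this update
        return max_points
    if grade >= 75.0:   # middle tier added in this update
        return max_points * 0.6
    if grade >= 65.0:   # bottom tier: small credit
        return max_points * 0.3
    return 0.0          # below all tiers: no credit
```

Adding the middle level is what lets some players gain score while top-tier points get harder to earn.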
Update 2023.1 - 2023 Week 1
For Week 1, 2022 season data was used for player grades and snap counts. Snap counts were hand-modified based on new depth charts, and opponent stats were computed manually based on projected offensive line ratings and pace of play.
Actual Week 1 snap counts and qualifying rookie performance data will be added to the model in Week 2. 2022 season player grades will be used for non-rookies through Week 3, with some manual adjustments for major risers and fallers.
Update 2022.4 - Week 10
Reduced the weighting of both opponent pressure rate and opponent plays per game, since neither has been a consistent correlate of fantasy output. I will use IOL metrics as tie-breakers within the top 15.
Increased weighting of both PFF Pass Rush and Run Defense grades as both have now consistently demonstrated relationships to fantasy output.
Update 2022.3 - Week 6
While the last update was more about process than substance (i.e., how to handle players coming off of injury), this one makes small improvements to the model itself.
Heavier weighting of PFF Run Defense Grade - back to its original level of roughly 14% of the evaluation after back-to-back weeks of elevated statistical importance.
Lowered threshold for PFF Run Defense Grade - when it is significant, this variable has an exponential effect (the top end is the part most strongly associated with performance). The inflection point of this curve is lower than expected, so I lowered the cutoff at which this variable improves a player's score.
Added a tier of snap count % - another exponential effect, but this one is more a series of steps than a smooth curve. Above 80% is still the high watermark, but 75-79.9% now gets a bump, and anything below 60% (previously below 55%) results in a penalty.
Reduced weighting of opponent offensive line - I am testing some new metrics of O-Line quality but given the current metric (pressure rate allowed) has not yet been significant, I am reducing this to roughly 7% of a player's score (from 14%).
This one is a little more behind the scenes, but I also slightly adjusted the cutoff scores for tiers to better fit the data. Expect a few fewer "Ideal" plays and a few more "Solid (Deeper)" plays in coming weeks.
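The snap-share tiers described in this update form a step function rather than a smooth curve. The thresholds (80% / 75% / 60%) come straight from the update; the point values themselves are illustrative, not the model's real weights:

```python
def snap_share_adjustment(snap_pct):
    """Step-function score adjustment for snap share.

    Thresholds (80 / 75 / 60) follow the Week 6 update; the point
    values are illustrative, not the model's actual weights.
    """
    if snap_pct >= 80.0:   # high watermark: full credit
        return 3.0
    if snap_pct >= 75.0:   # new 75-79.9% tier gets a bump
        return 1.5
    if snap_pct >= 60.0:   # neutral band: no bump, no penalty
        return 0.0
    return -2.0            # below 60% (previously 55%): penalty
```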
Update 2022.2 - Week 5
Attempting to project new snap shares for teams with injured players or players returning from injury based on historical data (i.e. the last time Ed Oliver played, the DT snap distribution was...).
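When no comparable historical game without the injured player exists, one crude fallback is to redistribute his snaps proportionally among the remaining DTs. This is a sketch of that idea, not the model's actual method, and the rotation shares below are made up:

```python
def redistribute_snaps(shares, out_player):
    """Drop an injured player and scale the remaining snap shares so
    the group's total stays the same (a simple proportional fallback
    when no historical game without him is available)."""
    remaining = {p: s for p, s in shares.items() if p != out_player}
    scale = sum(shares.values()) / sum(remaining.values())
    return {p: round(s * scale, 1) for p, s in remaining.items()}

# Hypothetical rotation: three DTs splitting snaps before an injury.
projected = redistribute_snaps(
    {"Starter": 60.0, "DT2": 40.0, "DT3": 20.0}, "Starter"
)
```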
Update 2022.1 - Week 3
After conducting some post hoc analyses of a crazy Week 2 for fantasy DTs, I have made the following adjustments to model variables:
Heavier weighting of prior-week snap count percentage - this is the only significant variable in the model this early in the year, so I am leaning on it more heavily.
Reduced weighting of PFF Run Defense Grade - a good run grade is the icing, not the cake
Reduced weighting of Opponent Average Offensive Snaps - too much variability this early in the year...maybe all year.
Greater tiering of PFF Pass Rush Grade and Snap% - tiered scores outperformed continuous variables in the model but needed further specification
Lowered advantageous matchup threshold - an exploitable matchup only matters if the player has the talent to exploit it. That talent level was lower than expected.
Origin and Methodology
I was inspired by Johny The Greek's excellent series "Cornerback Corner" on IDP Guys to see if I could 1) successfully predict DT output on a week-to-week basis and 2) demonstrate that streaming DTs (like CBs) is a viable strategy. I use a variety of player data (run defense and pass rush grades from PFF, snap shares from FantasyPros and FootballGuys) and opponent data (pressure rate allowed from PFF, opponent plays per game from TeamRankings) to sort players into four categories: Ideal, Solid, Solid (Deeper), and Avoid. True position determines who counts as a DT, and NPLB scoring is used to assess points.
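Putting the pieces together, the pipeline amounts to a weighted composite of the player and opponent inputs, cut into the four categories. Every weight and cutoff below is a placeholder for illustration; the model's actual values are not published:

```python
# Illustrative component weights (the run defense figure loosely
# echoes the ~14% mentioned in the update log; the rest are guesses).
WEIGHTS = {
    "pff_run_defense": 0.14,
    "pff_pass_rush": 0.25,
    "snap_share": 0.40,
    "opp_pressure_rate": 0.07,
    "opp_plays_per_game": 0.14,
}

def composite_score(metrics):
    """Weighted sum of component scores, each normalized to 0-100."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def category(score):
    """Cut a composite score into the article's four tiers
    (cutoffs are placeholders)."""
    if score >= 75:
        return "Ideal"
    if score >= 60:
        return "Solid"
    if score >= 45:
        return "Solid (Deeper)"
    return "Avoid"
```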
I am constantly iterating my model to be more successful, but here is the rundown of how I performed in year 1 (13 weeks):
2021 Summary
"Ideal" plays scored twice as much as "Avoids" and put up 0s only 2.24% of the time
You are roughly 2x as likely to hit >10 points with "Solid" plays compared to "Avoids", and 4x as likely to hit >20 points.
At this point you might be thinking 1) well, you are just putting the top projected DTs in your top groups and/or 2) there is no way I can stream DT. Well, I have some metrics here as well.
As seen below (left), I compared my "Ideal" tier against FantasySharks' weekly point projections. My selections met or exceeded those projections roughly 55% of the time; my ideal plays at least doubled projections roughly 20% of the time; and we were both wrong (the player put up a 0) only 3% of the time.
Below (right) is the point distribution of my weekly streamer duo (my top two rated players widely available in 12-team leagues). These 'under the radar' players averaged 13.84 points over the season, hit over 20 points six times, and scored less than 7 on only three occasions!
"Ideal" tier vs Fsharks Projections
Weekly streamer point distribution
But enough about the past...once the season starts, this is where you will find a weekly DT Deep Dive to help with your starting and streaming decisions for up to 150 defensive tackles! Stay tuned.