Alphabet Soup: Getting Defensive

I’ve always tried to be honest with you guys, I really have. We’ve been through a lot together: finding out that Batting Average and ERA are terrible; learning about ballpark factors; Carlos Pena. So that’s why I’m gonna be straight with you now.

Defense is really hard to measure.

Thing is, pitching and offense break down into pretty discrete events with pretty binary outcomes. Not entirely, or we wouldn’t even be here in the first place – but compared to defense, pitching and offense are a lot easier to quantify. With defense, you’re left wondering what didn’t happen, instead of measuring what did. There are a lot more variables in play, and a lot more interaction between those variables.

All that “traditional” statistics really offer us on the defensive end are errors and the related fielding percentage. Back when we talked about pitching, we covered why errors are minimally useful: they’re subjectively assigned, and they can disproportionately punish talented fielders who get to more (and more difficult) balls.

Fielding percentage is the ratio of successful ball-handling opportunities (putouts and assists) to total chances (putouts and assists and errors). So it’s pretty much entirely predicated on not making errors, which a defender can easily do by just not attempting risky or questionable plays. And I think we can all agree that a statistic that rewards players for not trying is probably not terribly useful.
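If you want the arithmetic spelled out, here’s a quick sketch. The putout, assist, and error totals in the example are made up for illustration, not any real player’s line:

```python
def fielding_percentage(putouts, assists, errors):
    """Fielding percentage: successful chances divided by total chances."""
    total_chances = putouts + assists + errors
    if total_chances == 0:
        return None  # no chances, nothing to rate
    return (putouts + assists) / total_chances

# Example: a hypothetical shortstop with 250 putouts, 400 assists, and 10 errors
print(round(fielding_percentage(250, 400, 10), 3))  # 0.985
```

Notice that the only way to drag that number down is to be charged with errors – the formula has no idea how many balls the fielder never got near.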

The current state of advanced defensive metrics is really pretty impressive. There are two main systems for measuring defensive contributions: Ultimate Zone Rating (UZR) and Defensive Runs Saved (DRS). They use different methods, but each tries to break a defender’s game down into components and assign a run value to each one (remember run expectancy?). The end result is expressed in runs prevented above or below average.
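To give you a rough sense of what “runs above or below average” means, here’s a toy illustration of the framework. This is not how UZR or DRS actually compute anything – the run values and probabilities below are invented purely to show the idea of crediting a fielder relative to an average fielder’s expected results:

```python
# Toy illustration: credit the fielder with the run value of plays he converted,
# minus the run value an average fielder would have been expected to convert on
# the same chances. All numbers here are hypothetical.
plays = [
    # (run value of the play, probability an average fielder makes it, did he make it?)
    (0.75, 0.95, True),   # routine grounder
    (0.75, 0.40, True),   # tough play in the hole
    (0.75, 0.60, False),  # ball he didn't get to
]

runs_above_average = sum(
    run_value * ((1 if converted else 0) - avg_prob)
    for run_value, avg_prob, converted in plays
)
print(round(runs_above_average, 2))  # positive = runs saved vs. the average fielder
```

The real systems slice things much finer than this – range, arm, errors, double plays, and so on – but the “compare to average, express in runs” logic is the common thread.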

We’ll go into more detail on those next week. But while they’re very good measures of a player’s skill in the field, they’re still not perfect. Fielding is arguably the area in sabermetrics with the most room for improvement, which is why it’s particularly important when talking about defense to cross-check your numbers and be aware of your sample sizes.

When examining defense, I’m very cautious when using less than three years’ worth of data. I always check both UZR and DRS, to see whether they agree on a player. If they don’t (and even sometimes when they do), I check video and seek out analyses from more experienced evaluators. These numbers are extremely useful, but only if one stays keenly aware of their limitations.
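If it helps, here’s roughly what that sanity check looks like as code. The sample numbers, the three-season minimum, and the five-run disagreement threshold are all my own rules of thumb, not anything official:

```python
# Sketch of my defensive-metric sanity check: pool at least three seasons,
# then see whether UZR and DRS roughly agree. Season values are hypothetical.
seasons = [
    {"year": 2011, "uzr": 8.2, "drs": 5.0},
    {"year": 2012, "uzr": 4.1, "drs": 7.0},
    {"year": 2013, "uzr": -1.3, "drs": 6.0},
]

if len(seasons) < 3:
    print("Not enough data -- wait for a bigger sample.")
else:
    total_uzr = sum(s["uzr"] for s in seasons)
    total_drs = sum(s["drs"] for s in seasons)
    if abs(total_uzr - total_drs) > 5:
        print("UZR and DRS disagree -- go watch some video.")
    else:
        print("The metrics roughly agree -- but check the tape anyway.")
```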

So with all that said, next week we’ll dig into how UZR and DRS work, what they measure, and why they’re different. If you’ve got any particular questions that you want me to address, leave ‘em in the comments. After all, I would never lie to you. Not after all we’ve been through. Especially Carlos Pena.
