A couple of weeks ago, we looked at DRS and UZR, two slightly different measures of defensive capability. Though they’re calculated in different ways, they both measure more or less the same thing, so it’s useful to be able to look at both and compare.
Though they have their differences, DRS and UZR have one key thing in common: they’re both measured in runs above or below average. This means, essentially, that they tell us how many more runs a player saved defensively than Joe Average would have at the same position.
That’s really useful, if you think about it, because it means we can stack a player’s defense up against his production with the bat and see how he does. But in order to do that, we need an offensive statistic that also measures runs above or below average. So join me, fellow travelers, on a trip to the past, to the olden days when we were quantifying offensive production…
We already have a few pretty great overall offensive numbers. The first was wOBA, essentially an on-base percentage in which each offensive event is weighted by its run value. Then we talked about wRC, which tallies up all the runs that a player created. Its cousin, wRC+, presents the same information with league average set to 100, and each point above or below that representing a percentage point above or below average.
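To make the wOBA idea a little more concrete, here’s a rough sketch in Python. The weights and the denominator follow the general shape of the FanGraphs formula, but the exact coefficients change a bit every season, so treat these numbers as illustrative rather than official.

```python
def woba(ubb, hbp, singles, doubles, triples, hr, ab, sf):
    """Rough wOBA sketch: each offensive event gets a weight that reflects
    (approximately) how many runs it's worth. The weights below are
    illustrative; the season-specific values are published by FanGraphs."""
    numerator = (0.69 * ubb        # unintentional walks
                 + 0.72 * hbp
                 + 0.89 * singles
                 + 1.27 * doubles
                 + 1.62 * triples
                 + 2.10 * hr)
    denominator = ab + ubb + sf + hbp
    return numerator / denominator
```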
But none of these actually measures runs above or below average. To do that, we need to add another weapon to our arsenal: Weighted Runs Above Average (wRAA). This number is pretty much exactly what it sounds like. It uses run expectancy, in the form of wOBA, to tally up the runs above or below average that a player contributed with his bat.
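Here’s the conversion itself, sketched the same way. The standard form is (wOBA minus league wOBA), divided by a league-wide “wOBA scale” constant, times plate appearances; the league numbers in the example are made up for illustration, not real figures.

```python
def wraa(player_woba, league_woba, woba_scale, pa):
    """Weighted Runs Above Average: how many runs a hitter contributed
    beyond what a league-average hitter would have in the same plate
    appearances. League wOBA and the wOBA scale are season-specific."""
    return (player_woba - league_woba) / woba_scale * pa

# Illustrative only: a .370 wOBA hitter over 600 PA in a .320 wOBA league.
print(round(wraa(0.370, 0.320, 1.20, 600), 1))  # about +25 runs above average
```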
This means it compares pretty well with UZR and DRS, though there are a LOT (let me repeat that, for emphasis: A LOT) of limitations to this comparison. For one, wRAA is league-adjusted but not park-adjusted, while UZR and DRS are both park-adjusted.
Additionally, while wRAA may not be predictive in small samples, it’s still indisputably a record of what happened on the field (viewed through the lens of run expectancies). UZR and DRS don’t work that way. For a detailed explanation of why, check out this section of FanGraphs’ UZR primer.
What it basically boils down to is this: each play tallied by these defensive statistics is also adjusted for difficulty. In other words, there’s a value judgment layered onto each play as well as a binary did-he-or-didn’t-he-make-it judgment. Imagine that you’re judging a player’s defense on a single play, one which you know to be (on average) very difficult. He makes the play in this instance.
UZR would value him very highly, but it’s only one play, and there are any number of reasons he might have made it this time. That high UZR value is neither a pure record of what happened on the field (it’s only one play, and the difficulty adjustment layers an estimate on top of it) nor a measure of the player’s true talent level (he made a difficult play this one time; we don’t know how often he’d make it, or other difficult plays, on a regular basis).
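To see why that matters, here’s a toy version of the kind of credit a difficulty-adjusted system hands out on a single play. This is my simplification, not the actual UZR or DRS math: the fielder earns roughly one minus the league-wide conversion rate when he makes the play, and loses that conversion rate when he doesn’t.

```python
def play_credit(made_play, league_out_probability):
    """Toy difficulty-adjusted credit for one ball in play, in outs above
    average. Converting a play the league makes only 10% of the time is
    worth +0.90; missing a routine 95% play costs -0.95. A simplification
    of the UZR/DRS idea, not their actual implementation."""
    if made_play:
        return 1.0 - league_out_probability
    return -league_out_probability

# One very difficult play, converted: the single-play number looks spectacular...
print(play_credit(True, 0.10))   # 0.9
# ...even though one play tells us almost nothing about how often he'd repeat it.
```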
So, basically, what I’m saying is what I’ve been saying all along: MORE DATA, PLEASE. The more, the merrier. And if you’re dealing with small sample sizes, assume that your “true talent” number is probably somewhere in between the number you have and the mean (in this case, zero).
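Here’s what that “somewhere in between” advice looks like as arithmetic: a standard regression-toward-the-mean blend, where the observed number gets more weight the more opportunities it’s built on. The regression constant below is purely a placeholder, not a published value.

```python
def regress_to_mean(observed, opportunities, regression_constant, mean=0.0):
    """Pull an observed number back toward the mean. With few opportunities
    the estimate sits near the mean; with many, it sits near what you
    actually saw. The regression constant (how many 'phantom' average
    opportunities to mix in) is illustrative only."""
    weight = opportunities / (opportunities + regression_constant)
    return weight * observed + (1 - weight) * mean

# A +8 run defender over 300 chances, with an illustrative constant of 600 chances:
print(round(regress_to_mean(8.0, 300, 600), 1))  # 2.7 -- much closer to zero than to +8
```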
We’ll talk a bit about regressing and look at some actual numbers next week. See you then!