Baseball Prospectus has published an article by Colin Wyers today that may be one of the most important pieces written about fielding measurement in the last decade. The full piece is available only to BP subscribers, but let me briefly recap some of the topics Colin covers.
Colin reiterates the point that uncertainty in fielding measurements is something that can be tackled with bigger sample sizes, i.e., more seasons of data. Bias, on the other hand, is persistent: it does not decrease with larger sample sizes of fielding data. He mentions two types of bias: that related to park/scorer and that related to the fielder’s range.
He then outlines a clever method for using data like putouts and assists in order to develop a fielding metric for infielders that should be much less subject to those two sources of bias than our current advanced metrics like Ultimate Zone Rating (UZR), Plus/Minus, and TotalZone. His metric very likely has greater uncertainty than the advanced fielding metrics that use ball-in-play data to determine which fielder had the best chance to field a batted ball. However, at some point, larger sample sizes should decrease the effect of the uncertainty, such that the reduction in bias using Colin’s method will actually produce more accurate measures of fielding. Is Colin’s method better after two seasons? Three seasons? Five seasons? Because we don’t yet know the size of the park-scorer bias or range bias, we don’t know exactly at what point that occurs.
Colin gives some fielding numbers from his system for shortstops, and with them, margins of error! That in itself is a very important advancement. He also shows that the advanced fielding metrics appear to compress the measure of Ozzie Smith’s fielding value by about 25% over his career.
As I mentioned in the comments to Colin’s article:
Colin, as I mentioned on Twitter, can you use these numbers to estimate the magnitude of range bias for various advanced fielding systems (and at various positions)? Over a large sample of players, the park-scorer bias should become much less important.
If the ~70 run difference for Ozzie Smith is due to range bias, and 1 play = 0.8 runs, and Ozzie played about the equivalent of 17 seasons, then 70 / 0.8 / 17 ≈ 5 plays per season, or roughly 4 runs per season, due to range bias.
If we apply the same method to a large group of players, we ought to be able to estimate the range bias.
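As a sanity check on that back-of-envelope arithmetic, here it is spelled out; every input is a rough guess from the comment above, not a measured value:

```python
# Rough range-bias estimate from the Ozzie Smith numbers above.
# All inputs are guesses, not measured values.
career_gap_runs = 70     # ~70 run difference across his career
runs_per_play = 0.8      # assumed run value of one play
season_equivalents = 17  # about 17 full-season equivalents

# 70 / 0.8 gives plays over the career; dividing by 17 gives plays per season.
bias_plays_per_season = career_gap_runs / runs_per_play / season_equivalents

# Converting back to runs: this is just 70 / 17.
bias_runs_per_season = bias_plays_per_season * runs_per_play

print(round(bias_plays_per_season, 1))  # plays per season
print(round(bias_runs_per_season, 1))   # runs per season
```

The 70 / 0.8 / 17 figure comes out in plays per season (about 5); multiplying back by 0.8 runs per play gives roughly 4 runs of range bias per season.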
Colin showed that the margin of error in his system for a full season of fielding by a shortstop was around 20 runs. Since independent random errors add in quadrature, the margin of error for three seasons of shortstop data would be around 20 × √3 ≈ 35 runs in total, or about 12 runs per season.
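A minimal sketch of that quadrature rule, assuming independent per-season errors of 20 runs:

```python
import math

sigma_one_season = 20.0  # runs of uncertainty in one season (Colin's figure)
n = 3                    # seasons of data

# Independent random errors add in quadrature: the total error over n
# seasons is sigma * sqrt(n), so the per-season error shrinks like
# sigma / sqrt(n).
total_error = sigma_one_season * math.sqrt(n)
per_season_error = total_error / n

print(round(total_error))       # runs over three seasons
print(round(per_season_error))  # runs per season
```

This is why more data helps with uncertainty but not with bias: the sqrt(n) only divides the random part.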
If we guess that advanced metrics can cut this uncertainty in half, that puts them at six runs of uncertainty per season on three seasons’ worth of data, plus whatever bias they may have. At what sample size does that bias become bigger than the improvement in uncertainty from using subjective ball-in-play data? It varies from player to player, for one thing, but my crude guesses suggest that for anything beyond roughly two to four seasons, Colin’s method could be superior.
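That trade-off can be put in a toy model: noise shrinks with sqrt(n), bias does not. Every number here is a crude guess from the discussion above, and the crossover point is entirely driven by the unknown bias magnitude:

```python
import math

# Guessed per-season noise levels (runs); neither is a measured value.
SIGMA_COLIN = 20.0     # Colin's method: noisier, but assumed unbiased
SIGMA_ADVANCED = 10.0  # advanced metrics, if they halve the noise

def per_season_rmse(noise, bias, n):
    """Per-season root-mean-square error after n seasons:
    the noise term shrinks like noise / sqrt(n); the bias term does not."""
    return math.sqrt((noise / math.sqrt(n)) ** 2 + bias ** 2)

def crossover_seasons(bias):
    """Sample size at which the two methods break even: solve
    SIGMA_COLIN^2 / n = SIGMA_ADVANCED^2 / n + bias^2 for n."""
    return (SIGMA_COLIN ** 2 - SIGMA_ADVANCED ** 2) / bias ** 2
```

The crossover is very sensitive to the bias: a bias of 10 runs per season puts it near three seasons, while a bias of 5 runs per season pushes it past a decade of data. That is exactly why estimating the size of the range bias matters so much.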
Not to go too far down that road, because there’s a lot of work to be done yet, but hopefully that shows why I am excited about what Colin has published today.