Radar gun readings: sabermetric building blocks?

Winning is the most important statistic for sabermetricians. Everything analysts have done, even from the early days, has been designed to end in wins. Today, we have a hierarchy of statistics that build up to wins.

A simplified view of things might be:

[Figure: a simplified hierarchy of baseball statistics, building from raw data up to wins]

In science, we call this a hierarchical model: first break the problem into levels, learn about what’s happening at each level, and then figure out the relationships between the levels. Sabermetric work can be classified into two categories (and I’m painting with very broad brushstrokes here): collecting the data at each level and figuring out how to get from one level to the next.

I’ve shown the first category in boxes, and the second category with diamonds.

The cool thing is that we’ve got a lot of this figured out. For example, we know now which hitting stats we need to collect, and that information is being recorded at places like Retrosheet or Baseball-Reference. While there are some active debates on the details, we also know how to translate hitting stats into runs scored by using some form of runs created or Linear Weights. In fact, it’s this sort of hierarchical thinking that’s behind the Team Graphs here at THT.

A lot of the great stuff going on in sabermetrics today is happening at the lower levels of the hierarchy. Here’s what I think the picture looks like if we focus on pitching.

[Figure: the statistical hierarchy for pitching, with pitch-level data and the traditional territory of scouts toward the bottom]

You can quibble with the details, but it’s certainly true that the relationships here are considerably more complicated and we’re not quite so sure of how to get from one level of the hierarchy to the other. You probably also noticed that the traditional domain of scouts is toward the bottom of this hierarchy. Sabermetrics is starting to encroach on this domain, though, with the great pitch-by-pitch analysis going on here at THT and elsewhere. Using numerical analysis to support scouting is also something that I know at least three teams (three smart, successful teams I might add) have bright folks working on.

Anyway, all this buildup is an excuse to traipse through the pitch data at Dave Appelman’s incomparable Fan Graphs. The data there aren’t just a curiosity; they’re part of the hierarchy of baseball statistics. We just need to make use of them properly.

Here’s a simple example: let’s say you’ve figured out that fastball velocity is a key component of some important aspect of pitching—strikeouts or walks or whatever. Then you’d need to know the “true skill” fastball velocity of pitchers. You might think it’s as easy as simply measuring how hard a pitcher throws, but it’s not, really. It might sound silly to be regressing something so…what’s the word…physical as fastball velocity.

But a good analyst wouldn’t take a measured strikeout-to-walk ratio and say that’s the “true skill” strikeout-to-walk ratio. He or she would regress to the mean – always! Fastball velocity doesn’t strike me as lying beyond some invisible line past which we need not regress.

You can read a little bit about regression to the mean here, but the key concept is that in baseball there is actual performance (the sample data we observe) and true skill (which we estimate with regressed data). We need to translate recorded fastball velocities (sample data) into “fastball velocity skill” (regressed data).
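To make that translation concrete, here’s a minimal sketch in Python; the pitcher, the 94.0 mph reading, and the 0.9 reliability weight are all made up for illustration:

```python
# Regression to the mean as a weighted blend of what we observed
# and what the league does on average. All numbers are hypothetical.

def regress_to_mean(observed, league_mean, reliability):
    """reliability: the share of the observed value we treat as true skill
    (1.0 = trust the sample completely, 0.0 = ignore it entirely)."""
    return reliability * observed + (1 - reliability) * league_mean

# A hypothetical pitcher who averaged 94.0 mph, against a 90.2 mph
# league average, with a made-up reliability of 0.9:
print(regress_to_mean(94.0, 90.2, reliability=0.9))  # 93.62 mph
```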

There were 90 pitchers who threw at least 100 innings in each of 2006 and 2007 (I’ve omitted Tim Wakefield and his 75 mph fastball in this analysis). The following plot shows the relationship between their average fastball velocity in 2006 and their average fastball velocity in 2007.

[Figure: scatter plot of average fastball velocity, 2006 vs. 2007, for the 90 pitchers]

As you might have guessed, the two track together beautifully. If you knew a pitcher’s fastball velocity in 2006, you could have made a pretty good guess as to what it would be in 2007.

To estimate any skill in baseball, we must always regress to the mean. How much we regress depends on two things: the spread of talent in the MLB population, and how much data we have (the sample size). In this case, the sample size is “fixed,” so to speak, since we are looking only at pitchers with 100 or more innings pitched in each of 2006 and 2007. So while the following result won’t be general, it will be illustrative.

The linear regression equation is (2007 fastball velocity) = 0.99*(2006 fastball velocity) + 0.87. Stick with me now; I promise the math won’t get too hairy. Among these pitchers, the average fastball velocity was 90.2 mph, and that 0.87 intercept is just about 1 percent of it. So we can recast the equation as:

(2007 fastball velocity) = 0.99*(2006 fastball velocity) + 0.01*(league-average fastball velocity)


In other words, if we wanted to predict 2007’s fastball velocity for a given pitcher, 99 percent of our prediction would be his 2006 fastball velocity and 1 percent would be the league-average fastball velocity. This is, of course, only valid between 2006 and 2007 and for pitchers with at least 100 IP. But I’d be willing to bet that a more rigorous study would give us a similar result.
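If you want to see the mechanics for yourself, here’s a minimal sketch in Python. The paired velocities are stand-ins I’ve made up, not the actual Fan Graphs data; with the real data, the slope comes out to the 0.99 above:

```python
import numpy as np

# Hypothetical paired average fastball velocities (mph) for the same
# pitchers in consecutive seasons; stand-ins for the real data.
velo_2006 = np.array([91.3, 88.7, 93.5, 90.1, 89.4, 92.0, 87.8, 90.9])
velo_2007 = np.array([91.1, 88.9, 93.2, 90.0, 89.6, 91.7, 88.0, 90.7])

# Ordinary least squares fit: velo_2007 ~ slope * velo_2006 + intercept.
slope, intercept = np.polyfit(velo_2006, velo_2007, 1)

# The slope is the share of the prediction that comes from the pitcher's
# own prior season; (1 - slope) is the share regressed to the mean.
league_mean = velo_2006.mean()
print(f"keep {slope:.0%} of 2006 velocity, regress {1 - slope:.0%} "
      f"to the league mean of {league_mean:.1f} mph")
```

The slope of that fit is exactly the “how much do we keep” number, and the intercept absorbs the league-mean term.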

In stat-speak, we would say that we regress a season’s worth of fastball velocity data 1 percent of the way to the mean. In comparison, you would regress a full season’s worth of batting average on balls in play (BABIP) data almost 85 percent of the way to the mean! After 100 innings pitched, we are very certain about a pitcher’s “true talent” fastball velocity.
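As a quick back-of-the-envelope comparison (the 95 mph pitcher and the .300 league BABIP here are assumptions for illustration, not figures from the data above):

```python
# Fastball velocity: regress just 1% of the way to the 90.2 mph mean.
print(0.99 * 95.0 + 0.01 * 90.2)    # 94.952 mph: barely moves

# BABIP: regress 85% of the way to an assumed .300 league mean.
print(0.15 * 0.350 + 0.85 * 0.300)  # 0.3075: most of the way back
```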

This is something of a trivial example, but it does illustrate how we should use pitch data to continue building the hierarchy of baseball stats. And, of course, fastball velocity probably is a very good predictor of some aspect of pitching performance. Just anecdotally, take a look at the pitchers who saw the biggest dropoffs in their fastball velocity between 2006 and 2007: Curt Schilling (-2.7 mph), Edgar Gonzalez (-1.9 mph), Orlando Hernandez (-1.7 mph), Mike Mussina (-1.5 mph). All struggled last year, to varying degrees, with performance and injuries.

So as you continue to enjoy the great work of guys like Josh Kalk, Mike Fast, John Walsh, Paul Nyman, and many others, remember that what they’re doing isn’t just a curiosity. It’s all part of expanding the hierarchy of sabermetrics.

