Baseball Think Factory regular and professional pitcher Carlos Gomez (aka “Chad Bradford Wannabe”) is responsible for bringing to my attention the most shocking, stupefying stat I have ever seen in all my years of following baseball. One day last year, around playoff time as I recall, he mentioned to me that less than 8 percent of first-pitch strikes turn into base hits.

Say what now?

It’s true, actually. The figure for 2003, as I calculated from some wonderful data provided by Tom Tippett at the Diamond Mind Weblog, is 7.3%. Those figures do not include pitchers hitting; presumably if you throw in the pitchers as well it would be a couple tenths of a percent worse for the hitters.

That’s just shocking. 92.7% of the time, if you throw a strike to the opposing hitter, you get either a 0-1 count or an out.

Now if the hitter manages to hit your first strike in fair territory, he does pretty well, as hitters batted .341/.348/.555 on 0-0 counts in 2003. Overall, these hitters (i.e. all major league non-pitchers) hit .268/.337/.430. So making fair contact on 0-0 generates some pretty good results!

All of this leads some baseball people to shy away from counseling a truly aggressive approach by pitchers. In fact, these sorts of numbers, and the resulting anecdotal evidence that “first-pitch hitters” do well, tend to lead hitting coaches to encourage their hitters to jump on the first pitch, because hitters generally do so well when they hit that first pitch. (That’s hit the first pitch, not swing at the first pitch; ideally, a follow-up study to this one will focus on hitters, incorporate swinging strike/called strike data, and look at where hitters could benefit from being more or less patient.)

But more than three-quarters of the time, that first strike from the pitcher is taken, swung through or fouled off. So in fact, looked at carefully, the pitcher still retains a massive advantage when that first pitch is in the strike zone. Why? Because once a pitcher gets to 0-1, hitters hit just .239/.283/.372 against him from there on out.

What Tippett’s terrific table (say that three times fast!) doesn’t do is combine the two outcomes of a first-pitch strike, i.e. the ball put in play on 0-0 and the count going to 0-1, into a single line. But it’s instructive to do so, so let’s compare the results of first-pitch strikes versus first-pitch balls…

0-0 Strike: .261/.296/.411

0-0 Ball: .280/.385/.459 (ignoring intentional walks, which start out 1-0 97.3% of the time)
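Those blended strike numbers can be sanity-checked with a little arithmetic. Here is a minimal sketch, assuming the blended batting average is just a weighted mix of the in-play-on-0-0 line and the after-0-1 line; the weight is backed out from the averages, not read from Tippett’s data:

```python
# Illustrative only: solve for the implied share of first-pitch strikes
# that get put in play, given the three 2003 batting averages above.
ba_in_play_00 = 0.341  # BA when the 0-0 pitch is put in play
ba_after_01   = 0.239  # BA once the count reaches 0-1
ba_strike_00  = 0.261  # blended BA after any first-pitch strike

# ba_strike = w * ba_in_play + (1 - w) * ba_after, solved for w:
w = (ba_strike_00 - ba_after_01) / (ba_in_play_00 - ba_after_01)
print(f"implied share of first-pitch strikes put in play: {w:.1%}")  # about 22%
```

That implied figure squares nicely with the observation above that more than three-quarters of first strikes are taken, swung through or fouled off.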

And that is a pretty substantial difference. Let’s imagine that we have two pitchers, both of whom are otherwise perfectly average but one of whom always throws a strike on the first pitch, while the other always throws a ball. The first pitcher, the “strike one” pitcher, has an expected ERA of about 3.60. The second one, the otherwise perfectly average one who always throws a ball on pitch one, has an expected ERA of about 5.50. He’ll also pitch about 12% fewer innings (without taking into account the higher pitch counts that would result from starting 1-0).
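The innings claim can be roughed out the same way. This sketch treats 1 minus OBP as the out rate per plate appearance (an assumption that ignores double plays and outs on the bases) and holds batters faced constant:

```python
# Rough sketch of the "about 12% fewer innings" estimate.
obp_after_strike = 0.296  # 0-0 strike line, 2003
obp_after_ball   = 0.385  # 0-0 ball line, 2003

outs_per_pa_strike = 1 - obp_after_strike
outs_per_pa_ball   = 1 - obp_after_ball

# Innings scale with outs recorded, so for the same number of batters
# faced, the always-ball pitcher records fewer of them:
ratio = outs_per_pa_ball / outs_per_pa_strike
print(f"innings relative to the strike-one pitcher: {ratio:.1%}")  # about 87%
```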

Now there are some limitations to this data; it doesn’t include hit batsmen (which would be a category all their own) and, more crucially, it doesn’t take into account pitches out of the strike zone that end up as swinging or foul strikes. But it is instructive on the massive difference between the first strike and the first ball. Put another way, I constructed a set of custom linear weights (using a method shown to me by Tangotiger) that counted only unintentional walks, leaving intentional walks out entirely (since IBBs almost always go 1-0, 2-0, 3-0). The result? The expected runs produced from each plate appearance beginning with a strike decrease by .029. The expected runs produced from each plate appearance beginning with a ball *increase* by .040. So that’s a difference of .069 runs on the scoreboard, from one pitch.

Now there are also additional benefits to throwing first-pitch strikes, beyond those bare results. Strikes also end plate appearances early, preserving a pitcher’s freshness and allowing him to face more hitters. Johnny Sain, the legendary pitching coach who has left dozens if not hundreds of disciples throughout baseball, said that the best pitch in baseball was the one-pitch out. Certainly the ideal result for a pitcher isn’t the strikeout, but the one-pitch out. Even putting that to one side, though, surely the notion that strike one (no matter if it’s put into play, swinging, called, foul, or what have you) is worth two runs a game is enough to encourage pitchers to throw first strikes more often? (Pitchers threw first-pitch strikes 57% of the time in 2003.) Now if that perfectly average pitcher threw first-pitch strikes 80% of the time instead of 57%, his ERA would decrease by about 0.64. If every pitcher on a team did it, it would save that team about 100 runs a year, or ten wins, turning average teams into pennant contenders.
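The team-level numbers fall out of the .069-runs-per-pitch figure with some back-of-the-envelope scaling. In this sketch, the plate appearances per game and the ten-runs-per-win rule of thumb are my assumptions, not part of the underlying data:

```python
# Back-of-the-envelope: what moving from 57% to 80% first-pitch
# strikes is worth to a team over a season.
fps_2003   = 0.57   # league first-pitch strike rate, 2003
fps_target = 0.80
run_swing_per_pa = 0.069  # strike vs. ball on 0-0, from the linear weights

pa_per_game = 38    # rough PAs faced per team per game (assumption)
games = 162

runs_saved_per_game = (fps_target - fps_2003) * run_swing_per_pa * pa_per_game
runs_saved_season = runs_saved_per_game * games
wins = runs_saved_season / 10  # ~10 runs per win rule of thumb

print(f"{runs_saved_season:.0f} runs, about {wins:.0f} wins")
```

Under those assumptions the sketch lands right around the figures quoted above: roughly 100 runs, or ten wins.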

All this got me thinking about the benefit of getting a strike versus a ball at each count. I would need full pitch-by-pitch data to establish this properly, particularly because of the foul strikes on two-strike counts, but I have compiled an estimate for each count. This data won’t be strictly accurate, because it separates each count from its context: it’s different to go to 3-2 by ball, ball, ball, strike, strike than by strike, strike, ball, ball, ball, but my method treats them as equivalent. As a result, all these are approximate numbers, because I don’t know how many 1-1 counts come from 1-0 and how many from 0-1. If you don’t understand the importance or implications of that detail, don’t worry; it is unlikely to make a large difference. So the data I’m now compiling isn’t perfect, but it’s close to right.
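The path-independence assumption can be sketched as a tiny count-graph model. This is purely illustrative: it uses a made-up fixed strike probability, ignores balls in play and the foul-ball rule on two-strike counts, and simply counts the orderings of balls and strikes that reach each count, treating every route as equivalent:

```python
from math import comb

p = 0.60  # hypothetical probability that any given pitch is a strike

def prob_reach(balls, strikes, p):
    """Probability a plate appearance passes through the count balls-strikes.

    Counts the comb(b+s, s) orderings of the pitches so far, each with
    probability p**s * (1-p)**b. Valid for balls <= 3 and strikes <= 2;
    ignores balls in play and fouls that extend two-strike counts.
    """
    return comb(balls + strikes, strikes) * (p ** strikes) * ((1 - p) ** balls)

for b, s in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2), (3, 2)]:
    print(f"{b}-{s}: {prob_reach(b, s, p):.3f}")
```

Under this toy model every path to 1-1 (strike-then-ball or ball-then-strike) is weighted identically, which is exactly the simplification, and the limitation, described above.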

I’ll present that data, which I’m still trying to tweak, in an article next week. If you want a sneak preview, I can only say that the amazing thing about the data is that the third *smallest* difference in outcome between a ball and a strike occurs on 0-0. (Throwing a strike on 0-1 and 0-2 is less advantageous). The biggest difference, of course, is on 3-2, where throwing a ball instead of a strike costs you well over half a run (remember, the effect of foul balls is not counted here, which would reduce the advantage a fair way).

This is not to say that the 0-0 strike is unimportant; throwing a strike is always important. Nor is it accurate to say that it’s less important than on other counts; since every plate appearance has a 0-0 pitch, the cumulative effect of all 0-0 pitches is greater than that of any other count except 3-2. But the count where the most difference can be made, the place where pitchers simply throw too many balls and not nearly enough strikes, is 2-2 (setting aside 0-0, and 3-2, where more analysis is needed to account for foul balls). Pitchers quite simply appear to cost themselves huge numbers of runs by throwing too many balls on 2-2, presumably trying to get hitters to chase. The difference in ultimate results between 2-2 and 3-2 is *so* massive, and there are a fair number of 2-2 counts. As a result, consistently throughout 2003, pitchers were punished for sitting back, pitching conservatively and letting the hitter back into the count at 2-2. At any rate, all this and more will be covered in Part Two.

So until next week, remember to stop trying to strike everyone out… get some ground balls. It’s more democratic.

**References & Resources**

I’d like to thank Carlos Gomez for inspiring this article and providing its central, motivating idea. Also Tom Tippett, for publishing the data that made this analysis possible.

Ted said...

Is the combined analysis of 0-0 strike vs 0-0 ball using data from 2003? I’m referring to

0-0 Strike .261/.296/.411

0-0 Ball .280/.385/.459 (ignoring intentional walks, which start out 1-0 97.3% of the time)

Would appreciate your reply!