Helping their own cause

Pitchers taking their turn in the batting order is, at least for now, still a part of the game of baseball. It’s a widely dismissed and neglected part of baseball, but it oughtn’t be. The at-bat a pitcher takes counts just as much as that of the hulking clean-up man, and the spread of batting talents that pitchers have affects the final outcome of the games, just as the different batting skills of any other position do.

It seems obvious that a capable batter should produce more wins than one for whom taking four pitches to strike out, or getting a bunt to stay fair, is a moral victory. Still, we almost never hear about the contribution that such a pitcher makes to his team. Carlos Zambrano produces an OPS of almost .900 for a division-winning Cubs outfit, and it takes no attention away from his legendary volatility. CC Sabathia shows some chops with the bat, but spends almost all his career in the American League, NL teams apparently seeing no percentage in the added dimension of his game.

I’ve given a look to pitcher batting before, identifying the teams that got the most offense from their moundsmen. This time I’m going to start with the individuals, but to answer a more general question: How much does the batting ability of a pitcher affect the won-lost record that he, or his team, compiles?

The method

For this study, I chose pitchers between the years of 1947 and 1971. Before 1947, we don’t have records for the OPS+ accumulated against pitchers, a measure of effectiveness I decided to use in place of ERA+. (I also wanted to avoid the World War II years as unrepresentative, including the transitional year of 1946.) The later boundary is determined by the advent of the designated hitter in 1973, though I stopped a year earlier to avoid the strike season of 1972, in perhaps an excess of caution with the data.

Pitchers in the study had to have been primarily starters, starting at least 60 percent of their games, and had to have qualified for the ERA title by pitching at least one inning for each game their teams played. Starters got the bulk of decisions in those days, and had the only real opportunities to collect meaningful numbers of plate appearances.

While counting all of their plate appearances, I counted only decisions they received for their starts, not for games in relief. Bullpen stints warp the won-lost numbers: the pitchers’ teams went a combined 761-1,342 in their relief appearances for the player years I surveyed. Back in those days, using a reliever, especially one who generally started, correlated much more closely to losing the game than it does today. Also, there is a good chance a pitcher would not have batted in a relief situation, undercutting the whole point of looking at the games.

I also selected pitchers at varying levels of pitching performance. I originally sampled them at 10-point increments of OPS+ against, but I expanded the sample to include qualifying pitchers within one point of every five-point increment from 75 (good) to 125 (bad). I did this in hopes of finding whether good pitchers also tend to be good hitters, or whether concentrating on one detracts from the other.

All told, the survey covers 467 pitcher-seasons, comprising over 100,000 innings pitched and 38,690 plate appearances.

I normalized their batting to the league OPS+ for pitchers in their respective years, as a base from which to calculate offensive value. (WAR is not only a relatively crude measure, but includes non-batting actions like baserunning.) I calculated their level above or below league pOPS+, multiplied that by the plate appearances they had, and divided by 100 to scale the numbers to something easier to grasp. The resulting number I have labeled pOPS Points.

To show how this works, take Warren Spahn in 1958. His .333/.381/.463 slash line produced a 131 OPS+, compared to the major league pitcher average of 18. He gets 113 points above average, times his 122 plate appearances, divided by 100 for scale to give him +137.86 pOPS Points. This is the best score in the survey, and easily Spahn’s best offensive performance; his second-best batting season produced “only” a 91 OPS+.

On the opposite end of the scale, I’ll take Chris Short in 1970. His -69 OPS+ was 77 points below the major league average of 8. Multiply that -77 by his 73 plate appearances, divide for scale, and you get -56.21 pOPS Points. This is bad, but not the worst for the pitchers I examined. (Tell Ron Kline I said hello.)
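The two worked examples above can be condensed into a few lines of code. This is a minimal sketch of the pOPS Points formula exactly as described; the function name is my own label, not anything official:

```python
def pops_points(ops_plus, league_pitcher_ops_plus, plate_appearances):
    """pOPS Points: a pitcher's OPS+ above or below the league pitcher
    average, weighted by plate appearances and divided by 100 for scale."""
    return (ops_plus - league_pitcher_ops_plus) * plate_appearances / 100

# Warren Spahn, 1958: 131 OPS+ vs. a league pitcher average of 18, in 122 PA
print(pops_points(131, 18, 122))   # 137.86

# Chris Short, 1970: -69 OPS+ vs. a league pitcher average of 8, in 73 PA
print(pops_points(-69, 8, 73))     # -56.21
```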

The preliminaries

For my first result, I give you the intersection of hitting ability with pitching ability, as opposed to pitching record. My peculiar manner of choosing eligible pitchers by oOPS+ shows up in the striping of the chart, but doesn't really affect the quality of the data. What we learn here is …

[Chart: batting pOPS Points plotted against oOPS+ allowed]

… um, nothing. Which is to say, the trend line is almost totally flat (look closely, you can just make it out), and the correlation is non-existent (an r^2 of 0.00064). A better pitcher shows no sign of being a better, or worse, hitter. One could conjure a plausible causal story in either direction: natural athletic ability creating a positive correlation by improving both activities, or stronger concentration on one's pitching leading to even greater neglect of bat work. Perhaps neither matters; perhaps they cancel out.

The only faint suggestion that this might not be so is due to the smaller sample sizes for the poorer pitching levels.

oOPS+  No. of seasons   PA   Ttl. pOPS Pts.  pOPS/100 PA
  75         45        3798        3.46         0.091
  80         55        5147      276.84         5.379
  85         75        6393      614.1          9.606
  90         70        5825     -243.25        -4.176
  95         74        6166     -172.59        -2.799
 100         51        4083       94.34         2.311
 105         39        2980     -222.97        -7.482
 110         21        1586     -103.52        -6.527
 115         18        1277       98.91         7.745
 120         12         880      -72.4         -8.227
 125          7         555      205.48        37.023
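The rate column is just each bucket's total pOPS Points divided by its plate appearances, re-scaled to a per-100-PA basis. A quick check against a few rows of the table (the function name is mine, for illustration):

```python
def rate_per_100_pa(total_pops_points, plate_appearances):
    # pOPS Points per 100 plate appearances for one oOPS+ bucket
    return total_pops_points / plate_appearances * 100

print(round(rate_per_100_pa(3.46, 3798), 3))     # 0.091  (75 bucket)
print(round(rate_per_100_pa(276.84, 5147), 3))   # 5.379  (80 bucket)
print(round(rate_per_100_pa(205.48, 555), 3))    # 37.023 (125 bucket)
```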

The sub-100 oOPS+ buckets tend toward good batting, while the 105-120 buckets lean more strongly toward bad batting, but with fewer seasons in the mix. The 125 oOPS+ results are a serious outlier, flattening the overall trend line. If I had instead taken a fixed number of seasons from each bucket, I might have gotten different results. I am loath to throw out data, though, so I'm stuck with the natural dearth of starting pitchers who threw poorly for a large number of innings.

There may be some lesson in those seven 125-oOPS+ seasons, six of which have above-average batting and two of which—Earl Wilson 1964 and Don Newcombe 1958—are truly outstanding, better than league average for all hitters. It may be that one of the few things that could keep you in the rotation during a bad year was a strong bat to compensate a little. (It probably didn’t hurt Wilson and Newcombe that they had historically great batting careers for pitchers, so managers may have been counting on at least that aspect of their game to hold up.)

I have one other piece of trivia before my main finding. Taken all together, the 467 seasons I studied produced a cumulative 524.6 pOPS Points. Working backward and using the plate appearance numbers, this comes out to my studied starting pitchers hitting 1.36 points above the average pitcher OPS+ for their times. With nearly 40,000 PA involved, this is likely a real, if small, margin.
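Working backward is simple arithmetic: the cumulative pOPS Points, re-scaled per plate appearance, recover the average margin in OPS+ points. A quick sanity check on the figures above:

```python
# Cumulative pOPS Points across all 467 seasons, and their total PA
total_points = 524.6
total_pa = 38_690

# Invert the pOPS Points formula: points per PA, times 100,
# gives OPS+ points above the league pitcher average
margin = total_points / total_pa * 100
print(round(margin, 2))  # 1.36
```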

Reasons suggest themselves. My preferred explanation is that the starting pitchers have a natural advantage in batting over relievers. They would often get to bat against opposing pitchers more than once, and we know now that batters improve their performance each time they face a pitcher in a game. Relievers had far fewer opportunities for that from 1947 to 1971 (and the number would be vanishingly small today).

I would like to believe that there was some deliberate sorting involved, that to some extent good batters got the chance to throw long and bad ones were relegated to the bullpen, at least as a tie-breaker when choosing between similar pitching skills. Liking to believe it and having any evidence at all to back it up are two separate things, and I don’t think I have the latter.

The main event

Now I tackle the original question, whether and how much a pitcher’s batting improves his record and the team’s. Batting, as stated, is measured by pOPS Points. For the records, I went with a simple wins minus losses, giving us the number of wins above or below .500 that a pitcher, or his team, enjoyed in his starts.

The charts follow, putting the batting against both the pitchers’ decisions and the team’s won-lost records in their starts. Note the correlation coefficients included on each chart.

[Chart: pOPS Points vs. pitcher won-lost margin in his starts, with correlation coefficient]
[Chart: pOPS Points vs. team won-lost margin in his starts, with correlation coefficient]

On both charts, the best record belongs to Whitey Ford in 1961. He went 25-4 in his starts, and the Yankees went 34-5. The worst personal performance is a pretty famous one, too: Don Larsen in 1954. He went 3-21 with the Orioles that year, but one loss was in relief, so here he is only 3-20. The worst performance for the team is another well-known one: Roger Craig for the 1962 Mets. They went 7-26 when he started—but in his nine relief appearances, they went 7-2! (Craig himself went 4-2 in those relief decisions.)

In each case, the trend line shows a slight correlation between good batting and good results in decisions. The relationship is a bit stronger for player record than for team record. That seems natural: no-decision results, by definition, have moved out of the pitcher’s control.

By no means are these strong correlations. The coefficient of determination (r^2), which measures how much of the variation in the records can be explained by batting performance, is just 2.4 percent for pitcher records and 2.0 percent for team records. That's a small piece of the game.
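For reference, the coefficient of determination reported on the charts is just the squared Pearson correlation. The underlying data points aren't reproduced here, so this minimal sketch uses a tiny made-up series to show the computation:

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

# A perfectly linear toy series gives r^2 = 1.0; the real pitcher-batting
# data lands around 0.02-0.024, per the charts above.
print(r_squared([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```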

In a broader sense, of course, this is right. Offense is only half of the game, or maybe a bit less, and the pitchers were only one-ninth of their teams' offense, for as long as they stayed in the games. Pitchers in the era I studied had roughly two-thirds the plate appearances of the average lineup position, and some of those PAs would have gone to relievers. On top of all that are the sacrifices the pitchers would have been making: sacrifice hits do not register in OPS at all.

Divide and sub-divide the responsibility for wins and losses that way, and it gets close to matching the little percentages I calculated through my spreadsheets, though by my reckoning it’s not quite there. It is reassuring to have the rough numbers confirmed by another method. Still, the importance of pitcher batting to their own records, or those of their teams, is not as great as one might have thought, or hoped. I cannot pretend that two (or two and a half) percent of the game is the secret pivot on which all the results turn.

I find myself making a comparison of this aspect of the game to another one: base-stealing. I think there are hidden possibilities in both; I have noted before at THT that I think the double-steal and the swipe of home are two of the most under-rated tactics in the game. This opinion does not alter the fact that stealing is just not that important to the overall game. Likewise, much as I may think pitcher batting holds potential for savvy exploitation, it does not offer that great a reward.

There are certain limits to how much it could be exploited anyway. There is effectively no lower bound at which a pitcher bats too badly to play in the majors. At the upper extremes, though, you reach the Babe Ruth problem: There are players too good with the bat to remain pitchers. They’ll be put in the field to get the full effect of their offense. Today, this decision is made at levels much lower than the majors, or very occasionally when the equation changes on the pitching end (as it did with Rick Ankiel).

The bounds are real, but a thin percentage remains a percentage. I’m not aware of any teams of the past paying particular attention to how their pitchers batted and making hay of it, but that doesn’t bind the teams of today. I know at least one team famous for looking for that extra two percent—but unfortunately, it’s in the wrong league to make anything of this most of the time. (There are interleague games.)

Consider it another market inefficiency, one of those edges hiding in plain sight. Even today, in an age that thoroughly despises and discourages pitcher batting rather than just ignoring it as earlier times did, there is a gain to be realized. With the escalating arms race between teams searching for those gains, perhaps some enterprising club will start making something of this.

Or, just maybe, they are already and are keeping it hush-hush. Do I have my next article concept here? Watch this space …


Comments

  1. Metsox said...

Travis Wood’s .296/.345/.556 triple slash is coughing quietly in the corner. Posted a 146 wRC+ ytd.

  2. AndrewJ said...

Many years ago (pre-sabermetrics) someone theorized that Warren Spahn owed about 50 of his 363 career victories to the fact that he wasn’t lifted for a pinch-hitter late in games, thus getting the “W” in come-from-behind games which might have otherwise gone to a reliever. Not sure how that holds up.

  3. Shane Tourtellotte said...

    Glad to hear about him.  Two little problems, though.  First, the dreaded small sample size:  28 plate appearances doesn’t establish very much. (All pitchers today have this problem, of course, and it wasn’t that much better in earlier eras.)  Second, given where the Cubs are currently languishing, Wood’s not likely to get very much attention for his batting even if he maintains the pace.

    But he does help give the Cubs’ pitching staff an OPS+ of 44, against a current NL pitchers’ average of 0.  I may start a bit of digging here …
