Ever heard of Brian Kingman? For decades, he was the answer to a trivia question: who was the last man to lose 20 games in a single season? (He went 8-20 for the 1980 A’s.) While certainly not a pleasant achievement, his unfortunate loss total was hardly atypical for the times. In the previous century of baseball, most seasons saw a pitcher lose 20 or more. It was a rare occurrence for MLB to go more than two years without someone else joining that most unfortunate pitcher club.
However, while it was common in the years before Kingman, times were changing fast. After he lost 20, no one else did so until Mike Maroth with the 2003 Tigers. In between, four different starters—Mike Moore, Scott Erickson, Omar Daal, and Albie Lopez—combined to make eight starts (exactly two by each of the quartet) after losing their 19th game, yet all avoided that ill-fated 20th decision.
If you think about it, that’s an amazing stretch of not losing by guys who normally lose. They really weren’t the worst pitchers in baseball (if they were, they would’ve been yanked from the rotation long before), but they certainly were far from the best, and even the best generally lose one start in eight.
A recent study reminded me of that incredibly good stretch by 19-loss pitchers. Sabermetric researcher Phil Birnbaum recently pondered whether pitchers were clutch when aiming for their 20th win. In both an article in SABR’s Baseball Research Journal and a presentation at the organization’s annual convention this summer, Phil noted that in recent decades there are more 20-game winners than 19-game winners. That normally doesn’t happen: more pitchers win 16 than 17, 17 than 18, 18 than 19, 21 than 20, and so on. The prized winning total is the only one out of whack.
Phil ultimately concluded that one of the main causes for the 19/20-win blip was how managers used their pitchers (starting them on short rest or putting them in relief for the 20th win). Still, I wonder if anything clutch occurs at the other end of the spectrum: do hurlers give it that much more to salvage some pride and not lose 20?
The post-Kingman era
Let’s start off with the post-Kingman guys. Including Maroth, they started nine games while staring down the barrel of 20 losses. In those contests, they went 4-1 with four no-decisions. In 60⅓ innings, they allowed 58 hits (including five home runs), walked 20, and struck out 33. All 24 runs allowed were earned, for an ERA of 3.58. That ain’t bad at all. Safe to say, 3.58 beats their normal ERA.
Let’s look at it a bit more precisely here, though. Below are the per-nine-inning-rate stats for these guys in the key games and overall. In the overall category, I’ll count a pitcher’s season stats twice if he had two starts (which was the case for everyone but Maroth). That may sound odd, but I think it keeps the two samples even. After all, if Mike Moore counts twice in the key games column, then he should count twice in the overall category to keep things symmetrical.
Looking at hits, walks, strikeouts, homers, runs, and earned runs per nine innings, here is how the pitchers did in the key games versus normally:
Rate    Key     All
H/9     8.65   10.62
W/9     2.98    3.20
K/9     4.92    4.96
HR/9    0.75    1.13
R/9     3.58    5.87
ER/9    3.58    5.22
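As a sanity check, here is a minimal Python sketch that reproduces the Key column from the raw totals quoted above. Innings are passed as outs, so the group’s 60⅓ innings becomes 181 outs:

```python
def per_nine(events, outs):
    """Rate of some event per nine innings, with innings passed as outs."""
    return 9 * events / (outs / 3)

# Post-Kingman key games: 60 1/3 IP (181 outs), 58 H, 20 BB, 33 K, 5 HR, 24 ER.
outs = 181
for label, count in [("H/9", 58), ("W/9", 20), ("K/9", 33), ("HR/9", 5), ("ER/9", 24)]:
    print(label, round(per_nine(count, outs), 2))
# H/9 8.65, W/9 2.98, K/9 4.92, HR/9 0.75, ER/9 3.58 -- matching the Key column
```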
Folks, that’s impressive. There is an across-the-board improvement. It’s negligible with strikeouts, but much of the rest is substantial. There is a sample size caveat here, but they certainly look to be doing better. While the game’s annual temperature curve should depress offensive output in late September, that doesn’t explain the notable drop in run scoring.
Actually, this sample looks even better if you remove Maroth, who got shelled for eight runs in three innings. Those who stared into the eye of 20 losses and lived to tell the tale saw their normal pitching improve as the following chart shows:
Rate    Key     All
H/9     7.69   10.60
W/9     2.83    3.31
K/9     4.87    5.07
HR/9    0.47    1.07
R/9     2.51    5.85
ER/9    2.51    5.16
Not surprisingly, the disparity in results is even starker. That said, the defense-independent stats really don’t explain a 50-plus percent drop-off in run scoring. Strikeouts actually drop, walks decline by “only” 15 percent, and while homers do decline significantly, the difference in homers per inning is worth only around a run per game.
Meanwhile, hits decline by almost 30 percent. Exclude homers and in-the-ballpark hits still fall by 25 percent. Is it the pitchers dialing their game up a notch or the defenders not wanting to let their workhorses down?
Or is it luck as the balls land where fielders are? Speaking of luck, none of these rate differences really explain a 57 percent drop in run scoring. Yet the difference between 2.51 R/9IP and 5.85 is equal to that. Look, these pitchers are doing better—no doubt about that—but it’s an open question if they’re the main cause for the improved results.
Personally, I like to think that they’re doing something to be responsible for these differences. That said, I need to acknowledge that might be wishful thinking on my part, and/or an attempt to show that this digging through the data was worth it.
Let’s increase our sample size and see what happens.
“19rs” over the last half-century
Raiding Baseball-Reference, I can find the games by all pitchers with exactly 19 losses since 1954. Let’s look at their games and see what they say. To be more precise, let’s look at their starts.
This is actually an important point to note, given that once upon a time almost all starters were called on to pitch in relief at times. From 1956 to 1965, pitchers with 19 losses combined for nine relief appearances. Roger Craig recorded his 20th loss in relief in 1962, and a few others did likewise in the 1950s.
As a rule, I’m going to ignore these relief appearances in this study. Most occurred when the pitcher really had no chance to lose the game, and they just muddy the waters. Ultimately, there aren’t enough of these games to be a huge concern, but this little wrinkle should be noted.
In all, from 1954 to 2009, pitchers started 79 different times with a chance to lose their 20th game. In those games, the pitchers went 27-33 with 19 no-decisions for a combined personal winning percentage of .450. That’s much better than they normally did.
These pitchers went 1003-1793 (.359) on the year. (That double counts the season totals for those with two 19-loss starts, triple counts those with three such starts, and so on; again, the goal is to make the two samples symmetrical.) Subtract their 19-loss starts from the overall records, and the remainder works out to 956-1745 (.354). At that rate, these pitchers should’ve gone 21-39 in their big games, not 27-33.
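The expected record is simple back-of-the-envelope arithmetic; here is a quick Python check using the win-loss totals quoted above:

```python
# Record in the 79 starts made with 19 losses in hand: 27-33 (19 no-decisions).
decisions = 27 + 33

# The rest of their seasons, after subtracting those starts: 956-1745.
rest_pct = 956 / (956 + 1745)
print(round(rest_pct, 3))       # 0.354

# Applying that rate to their 60 decisions in the big games:
expected_wins = rest_pct * decisions
print(round(expected_wins, 1))  # 21.2 -> an expected record of about 21-39
```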
Then again, pitcher win-loss records are very blunt instruments for evaluating performance. Let’s look at these 79 starts the same way we looked at the post-Kingman guys: compare their performance rates on a per-inning basis in the key games with how they normally did. To keep the two groups being compared symmetrical, if a hurler started three games with 19 losses, his career stats will be included three times in the right column. Here are the results:
Rate    Key     All
H/9     9.23    9.13
W/9     3.06    3.03
K/9     5.23    5.28
HR/9    0.84    0.89
R/9     4.11    4.51
ER/9    3.72    3.95
Well, that’s not exactly the result I expected to see. I thought I’d see them either pitching better, resulting in fewer runs, or pitching about the same, with runs allowed staying where they’d been. Instead, it looks like these men pitched exactly as well and as poorly as they normally did—except opponents somehow scored fewer runs.
I can’t explain this. I have no clue. Their defense-independent stats (walks, strikeouts, and homers) are virtually identical to their normal performance. The defenses behind them actually allow slightly more hits. By all rights, their runs per nine innings should be right around 4.5. Nope. Keep in mind the sample size is a half-season’s worth of games.
I can think of one thing to tweak. I’m doubtful it will affect the results, but it’s worth looking into. When I look at seasonal info (the All column), I add together raw numbers and then determine overall per-inning rates. It would be better to figure out the rates for each season and then average them.
For example, in 1979 Phil Niekro tossed 344 innings on the year (including two starts at 19 losses). In 2000, Omar Daal (who also had a pair of 19-loss starts) tossed 167 innings. By adding raw numbers, Niekro counts twice as much as Daal.
This could skew the results because the guys with the most innings on the year should, by and large, be the better pitchers. (Heck, Niekro had a winning record in 1979, 21-20.) If the better pitchers count for more, that would distort the control sample. If Daal threw six innings per start in his 19-loss games (which he did), Niekro can’t go 12 to keep things even. So I’ll look at rates instead for all seasons and average them.
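Here is a small illustration of the difference between the two approaches. The innings totals are Niekro’s and Daal’s from above, but the ERAs are hypothetical, chosen only to show the direction of the skew:

```python
# Two seasons of very different workloads (ERAs here are illustrative, not real).
seasons = [
    {"ip": 344, "era": 3.39},  # a Niekro-sized workload
    {"ip": 167, "era": 4.96},  # a Daal-sized workload
]

# Pooled raw totals: back out earned runs, sum them, then re-rate.
# The heavy workload dominates, dragging the combined rate toward its owner.
total_er = sum(s["era"] * s["ip"] / 9 for s in seasons)
total_ip = sum(s["ip"] for s in seasons)
pooled = 9 * total_er / total_ip

# Season-by-season average: each year counts once, regardless of innings.
averaged = sum(s["era"] for s in seasons) / len(seasons)

print(round(pooled, 2), round(averaged, 2))  # pooled sits closer to the big workload
```

Weighting seasons equally rather than by innings nudges the control rates upward, since the heaviest workloads tend to belong to the better pitchers.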
A pitcher’s season will still be double counted if he started twice in these games, but the rates will be counted instead of the raw numbers. Once you make that adjustment, here is what happens to the results for all 79 starts (I’ll leave the raw numbers total in for perspective):
Rate    Key     All     Adj
H/9     9.23    9.13    9.18
W/9     3.06    3.03    3.05
K/9     5.23    5.28    5.25
HR/9    0.84    0.89    0.90
R/9     4.11    4.51    4.58
ER/9    3.72    3.95    4.02
All those words for negligible differences. The seasonal performance does decrease (indicating that better pitchers were the ones with more innings; shocking, I know), but the decline isn’t worth much. In most cases, it brings the performance in 19-loss games even closer in line to overall performance.
Most importantly, it doesn’t solve the overall dilemma: why the hell are pitchers allowing fewer runs when everything that causes (or prevents) runs is occurring at the same rate as always? It would be great if I had a brilliant and insightful response to that question. To be honest, I have no idea what’s going on here. Your guess is as good as mine.
The first effort
I can think of one other variable: by looking at guys with just 19 losses, I’m distorting in favor of those who are doing better. A pitcher who loses his first start falls out of the sample, while those who endure move on. The guys who survive longer are, not surprisingly, the ones pitching better. Only two guys lasted four starts, and both pitched great: in 1973 Gaylord Perry went 4-0 with a 1.25 ERA and nearly a strikeout per inning, and in 1963 Orlando Pena went 3-1 with a 2.42 ERA in the key games.
Let’s break it up this way: the 79 starts comprise 46 first efforts, 23 second tries, eight third go-rounds, and a pair of fourth starts. How do they fare when broken up that way?
In the 46 first starts, the pitchers went 11-21, which is essentially their normal winning percentage. Using the same chart shown above, here is what their performance looked like, compared to how they normally pitched:
Rate    Key     All     Adj
H/9     9.29    9.16    9.21
W/9     3.33    3.15    3.17
K/9     5.07    5.17    5.14
HR/9    0.89    0.90    0.90
R/9     4.48    4.60    4.66
ER/9    4.03    4.04    4.10
Well, they pitched about the same. Nothing clutch in the rate stats. Runs are still a tad lower in these games than normally, but the difference is more reasonable. This is the first chart to date that looks normal.
The second effort
Well, what happened in the second starts then? Those guys went 13-6, which is far better than anyone would ever expect them to do. Do the better results reflect improved pitching, though?
Rate    Key     All     Adj
H/9     8.43    9.23    9.29
W/9     3.12    2.90    2.93
K/9     5.83    5.38    5.37
HR/9    0.83    0.89    0.90
R/9     3.54    4.55    4.64
ER/9    3.12    3.95    4.04
That is a substantial rise in effectiveness. The improvement in runs once again seems out of line with the rate stats, but at least this time there is improvement in both.
Why would this be the case? Could be sample size. Or it could be one last give-it-all-you-got effort. Even though 17 of these guys survived without losing, only eight had another start. The end was in sight. The first escape might’ve given them some greater hopes, and that carried them through here.
The remaining games are too small a sample to be worth looking at, but if you’re curious pitchers were 2-5 in their third starts, and 1-1 in the finale. Thus from the second start onward, 19-losers were 16-12; not bad.
OK, so are pitchers clutch when trying to avoid the 20th loss? There’s no clear answer, but I have some thoughts.
They allow fewer runs while pitching the same. Go figure that one out. My own hunch is that something is going on during the clutch situations of those games causing runs to decline. Normally I’m averse to psychological arguments, but this does strike me as an unusual circumstance. Players always try their best, but nothing triggers those brain chemicals and adrenaline glands like the threat of imminent humiliation. Having a runner in scoring position when you’re trying not to lose a 20th game has to grate on a pitcher just a bit more than a normal circumstance would. After all, most of these guys have egos.
So why don’t they do better all game long? Pitching is a physically and emotionally exhausting task spread over two to three hours. Emotions can only be stirred up so much, and they spike when it’s most important: when the humiliation really is imminent.
I think this also explains why 19-loss pitchers have done so much better since Kingman. However embarrassing it was to lose 20 in the days of yore, at least it was fairly common. Only five men have even faced the possibility in the 29 years since, and just one of them succumbed. You aren’t just losing 20, you’re doing something unknown in a baseball generation.
That said, I’ll acknowledge the above paragraphs reflect my interpretation of the numbers more than the numbers themselves. I can’t prove that pitchers are clutch when trying to avoid their 20th loss, but I think there is a little bit of that going on.
References & Resources
Largely inspired by “Players Being ‘Clutch’ When Targeting 20 Wins” by Phil Birnbaum in the Baseball Research Journal 38:1 (Summer 2009). I was unable to see his presentation at SABR39 because it was opposite mine, but he told me afterwards that the clutch targeting was largely a product of managerial usage, and the article (naturally) makes the same point.
I got the info for this from Baseball-Reference.com.