Lose a tick, gain a tick

[Photo: In 2009, Justin Verlander regained the zip on his fastball and his dominance on the mound, striking out 269. (Icon/SMI)]

Major league pitchers throw hard. A typical major league fastball takes about four-tenths of a second to go from leaving the pitcher’s hand to crossing home plate: at 90 mph, or 132 feet per second, the ball covers the roughly 55 feet from the release point to the plate in about 0.42 seconds.

But what is “typical”? A fastball from 47-year-old Jamie Moyer buzzes in at around 81 miles per hour (still much harder than I can throw at almost 10 years his junior). Jonathan Broxton pumps in the heat at 97 mph. What would happen if you gave Broxton’s 97-mph hard stuff to Moyer and Moyer’s well-located “fast” ball to Broxton? Would Moyer become dominant? Would Broxton be out of baseball in short order? That’s an extreme hypothetical, but pitchers deal with smaller fastball velocity changes all the time.

For example, Madison Bumgarner’s loss of velocity is one of the big stories out of spring training. His 93-mph fastball went missing sometime last summer, replaced by an 88-90 mph version. Everyone seems to agree that’s not good. Some people hope his velocity will eventually return; others seem to be readying their typewriters to punch out his baseball obituary. But what’s reasonable to expect from Bumgarner if his earlier velocity never returns? What does the loss of 4 mph mean for Bumgarner? For the typical pitcher?

This is a question that comes up frequently, albeit in various forms, depending on the pitcher and the circumstances surrounding his velocity increase or decrease. People wonder if Brett Anderson will retain the faster fastball he found midway through 2009, and how much of his improved effectiveness was due to the extra velocity. Does Tim Lincecum’s loss of a mph or two in 2009 bode ill for his future? It certainly seemed to do him no harm last year. When Justin Verlander and Barry Zito regained a little of their old zip, was that why they turned in improved campaigns in 2009?

After repeatedly encountering questions like these, I decided to investigate further. Both common sense and a cursory look at the numbers tell us that pitchers who throw harder give up fewer runs. However, quantifying that effect turns out to be harder than it might seem at first glance.

Performance vs. velocity

The approach that Chris Quick used in his article at Bay City Ball is a good place to start. If one takes the BIS fastball velocity data from Fangraphs for every starting pitcher and every season 2002-2009 in which he pitched at least 100 innings and charts his runs allowed per nine innings (RA), one gets a graph that looks like this:

Starters RA vs. fastball speed

There’s a discernible trend toward lower runs allowed in pitcher seasons with higher fastball velocities, albeit just barely. One could draw a best-fit linear regression line through the data and report that RA drops by 0.09 for each increase of 1 mph in fastball velocity, with an R-squared of 0.05. Weak trend quantified. Close the books, we’re done. Right?

No! If you wanted to increase the R-squared, you could do what Chris Quick did and use Fielding Independent Pitching (FIP), an ERA estimator, instead of actual runs allowed (although he should have used fastball velocity rather than FIP as the independent variable in his graph). The slope of the best-fit line stays about the same, -0.10 per mph, but R-squared goes up to 0.13.
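For readers who want to try this kind of fit themselves, here is a minimal sketch in Python. The file name and column names ("fb_velo" and "ra9") are illustrative stand-ins, not a description of my actual code or of the Fangraphs export format.

```python
# Sketch: best-fit line of runs allowed per nine innings (RA) against
# fastball velocity for qualifying pitcher-seasons. Assumes a CSV with
# hypothetical columns "fb_velo" (mph) and "ra9" (RA per nine innings).
import pandas as pd
from scipy.stats import linregress

seasons = pd.read_csv("starter_seasons_2002_2009.csv")  # >= 100 IP each

fit = linregress(seasons["fb_velo"], seasons["ra9"])
print(f"slope: {fit.slope:+.2f} RA per mph")   # roughly -0.09 in this study
print(f"R-squared: {fit.rvalue ** 2:.2f}")     # roughly 0.05 in this study
```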

We need to be careful here about something that often trips up analysts. The R-squared number is not appropriately used to tell us whether the relationship between our two variables is strong, weak, or nonexistent. It tells us how much of the variance in RA in our sample is explained by fastball velocity, and the sample we choose affects R-squared. In addition, we know that many other things affect a pitcher’s RA besides velocity: his ability to locate his pitches, the deception on his change-up, the break on his curveball, etc. Thus we’re appropriately more interested in the slope of the line than in the fraction of the sample variance that we can explain.

So we see that this initial approach is fundamentally limited, both by the blunt tool of best-fit linear regression and by the nature of our data. The best-fit line does a fair job of telling us that there is a trend in the data, but it does a poor job of providing an accurate quantitative answer to our question about what happens to a pitcher’s effectiveness when he loses or gains speed on his fastball.

A more revealing approach

Perhaps my favorite tool for attacking problems like these is what Mitchel Lichtman has called the “poor man’s regression.” Let’s divide the pitcher-seasons into bins based on similar fastball velocity. With this method, we can include every starting pitcher, weighting their contribution by playing time. We can also apply this method to relief pitchers.
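Here is a minimal sketch of that binning step, with playing-time weighting. The data layout (columns "fb_velo", "ra9", and "ip") is an assumption for illustration, not the format of my working data.

```python
# Sketch of the "poor man's regression": bin pitcher-seasons by fastball
# velocity and take the innings-weighted average RA in each bin.
# Columns "fb_velo", "ra9", and "ip" are assumed for illustration.
import numpy as np
import pandas as pd

def binned_ra(seasons: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    df = seasons.copy()
    # Quantile-based edges keep the sample in each bin roughly equal.
    df["velo_bin"] = pd.qcut(df["fb_velo"], q=n_bins)

    def weighted(group: pd.DataFrame) -> pd.Series:
        w = group["ip"]  # weight each season by playing time
        return pd.Series({
            "mean_velo": np.average(group["fb_velo"], weights=w),
            "mean_ra": np.average(group["ra9"], weights=w),
            "total_ip": w.sum(),
        })

    return df.groupby("velo_bin", observed=True).apply(weighted)
```

Plotting the mean RA against the mean velocity for each bin gives charts like the ones that follow.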

RA vs. fastball speed by bins

Here we get a better picture of how major league pitchers’ run averages compare to average fastball velocity. Run average is pretty flat for starters up to around 89 mph and for relievers up to about 91 mph. Then at higher velocities, RA decreases by roughly 0.20 per additional mph. Why would a pitcher’s effectiveness be insensitive to fastball velocity at lower velocities?

The most obvious explanation is that our sample, major league pitchers, is not independent of fastball velocity. Pitchers who can’t throw 80 mph don’t play in major league baseball at all (unless they can float a knuckleball like Tim Wakefield). And pitchers who can’t top 90 mph don’t play in the majors unless they have other skills that help them compensate for a slower fastball. Pitchers with slow fastballs and poor effectiveness get weeded out of our sample. This selection bias makes it difficult to trust our quantitative estimates of RA changes due to fastball velocity changes.

Let’s try a different approach that examines our question more directly. What happens to a given pitcher’s effectiveness when he loses a mph off his fastball, or adds a mph? We can look at how pitchers fared from season to season compared to changes in their fastball speed between those seasons.

Starters change in RA vs. change in fastball speed

Again, a slight trend is discernible in the direction we would expect: if a pitcher increased his fastball velocity from season one to season two, his run prevention improved, with the best-fit line at -0.24 RA per 1-mph increase (and an R-squared of 0.03).

If we return to our previous method and divide the pitcher-season pairs into bins based on similar change in fastball velocity between seasons, we can gain some confidence that we’re not just looking at noise here.

Starters change in RA vs. change in fastball speed by bins

The best-fit line here has a slope of -0.28 RA per one-mph increase in fastball speed, similar to what we saw in the overall graph. This time I included all starting pitcher season-pairs rather than applying the cutoff from the previous graph of a minimum of 81 innings in each season of the pair. I weighted each pair’s contribution to the sample by the lesser of the innings pitched in the two seasons. In addition, I adjusted each pitcher’s run average to account for changes in the MLB run average by season. (More on that in just a moment.)
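In code, the pair-building, league adjustment, and weighting might look something like this sketch, again with assumed column names:

```python
# Sketch of building season-to-season pairs with a league adjustment.
# Assumes one row per pitcher-season with columns "pitcher_id", "season",
# "fb_velo", "ra9", and "ip"; all names are illustrative.
import pandas as pd

def season_pairs(seasons: pd.DataFrame) -> pd.DataFrame:
    df = seasons.sort_values(["pitcher_id", "season"]).copy()

    # Express each RA relative to the league's IP-weighted RA that season.
    league_ra = df.groupby("season").apply(
        lambda g: (g["ra9"] * g["ip"]).sum() / g["ip"].sum()
    )
    df["ra_adj"] = df["ra9"] - df["season"].map(league_ra)

    prev = df.groupby("pitcher_id").shift(1)       # each pitcher's prior row
    adjacent = df["season"] == prev["season"] + 1  # consecutive seasons only

    pairs = pd.DataFrame({
        "delta_velo": df["fb_velo"] - prev["fb_velo"],
        "delta_ra": df["ra_adj"] - prev["ra_adj"],
        # Weight each pair by the lesser of the two seasons' innings.
        "weight": pd.concat([df["ip"], prev["ip"]], axis=1).min(axis=1),
    })
    return pairs[adjacent].dropna()
```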

We can also do a similar chart for relief pitchers.

Relievers change in RA vs. change in fastball speed by bins

The first thing to notice is that a reliever’s performance appears to be more sensitive to changes in fastball speed than a starter’s. Relievers improve by about 0.45 in RA for every one-mph increase in fastball speed. This probably shouldn’t come as a surprise. For one thing, relievers throw slightly more fastballs than starters do, 65 percent to 62 percent, according to BIS. Moreover, accepted baseball wisdom tells us that relievers typically don’t have off-speed and breaking pitches that are up to the quality of their fastball; after all, that’s why they are in the bullpen and not tasked with going through the lineup three times like a starter. I did not investigate that theory in this study. However, assuming it’s true, if relievers are more reliant on their fastballs, both in terms of overall percentage thrown and as their go-to pitch in key situations, it makes sense that they would see a larger effect from changes in fastball speed.

Average major league velocity

Now for the digression I promised earlier. When calculating the change in a pitcher’s RA between seasons, I adjusted for the change in MLB RA between seasons. It doesn’t make much difference in the numbers we are interested in, but it seemed like the right thing to do. I could have taken that a step further and adjusted for changes in ballparks for pitchers who switched teams, or for myriad other factors. Given how little change I saw from making the league adjustment, I didn’t feel any further adjustments were worth the effort.

However, considering what changes might occur in the league from year to year led me down another path. It may not seem relevant to our question at first, but it is interesting, and I think it ultimately proves relevant as well. Have you ever wondered how the average fastball velocity in MLB has changed over time? It turns out that it has been rising, and rather surprisingly so in the past couple years, at least according to the BIS data. The average MLB fastball was 89.9 mph in 2002, 90.2 mph in 2007, and 91.2 mph in 2009.

I’m not the first person to notice this. Dan Novick wrote about it in February, and he was skeptical that the recent rise in speed was real rather than a measurement issue. Several commenters suggested possible sources of measurement problems. One suggestion was that BIS was increasingly identifying cut fastballs separately, whereas in earlier years they were more likely to be lumped in with regular fastballs. This turns out to be true, although the overall effect is relatively small. When we include cut fastballs, the average MLB fastball was 89.9 mph in 2002, 90.1 mph in 2007, and 90.9 mph in 2009, as recorded by BIS.
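The cutter correction itself is simple arithmetic: fold the cut fastballs back into the fastball average, weighting by how often each pitch type was thrown. A sketch, with assumed column names:

```python
# Sketch of a cutter-inclusive league-average fastball speed.
# Columns "pitches", "fb_pct", "fb_velo", "ct_pct", and "ct_velo" are
# assumptions (fraction of pitches thrown and average speed by type).
import pandas as pd

def league_fastball_speed(seasons: pd.DataFrame, include_cutters: bool) -> float:
    fb_n = seasons["pitches"] * seasons["fb_pct"]   # fastballs thrown
    speed_sum = (seasons["fb_velo"] * fb_n).sum()
    count = fb_n.sum()
    if include_cutters:
        ct_n = (seasons["pitches"] * seasons["ct_pct"]).fillna(0)
        speed_sum += (seasons["ct_velo"].fillna(0) * ct_n).sum()
        count += ct_n.sum()
    return speed_sum / count
```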

Average MLB fastball speed by season

Another suggestion was that the introduction of the PITCHf/x system in 2007 and the increasing use of PITCHf/x instead of radar guns for pitch speed displays on stadium scoreboards might be corrupting the BIS data. This assumes (1) that PITCHf/x is providing pitch speeds that are noticeably higher than the radar guns that were previously used and (2) that BIS is taking their speed readings from a source, such as the stadium scoreboard, that comes from PITCHf/x. The second assumption seems plausible, although I don’t know either way. But I was immediately skeptical of the first assumption for several reasons. First, Sportvision and MLBAM settled on reporting the pitch velocity from PITCHf/x at 50 feet from home plate because they determined that the speeds they measured at that distance best corresponded with the speeds reported by radar guns. My anecdotal experience tends to agree with their evaluation. Second, only some teams use the PITCHf/x data as the source for their scoreboard displays, so to have a league-wide effect of 1 mph would require an increase in the affected stadiums of more than 1 mph. Nonetheless, I decided to investigate this theory.

I looked at the change in fastball speed for starting pitchers from 2007 to 2008 and from 2008 to 2009, grouped by team. The deviations by team largely fell within +/- 0.5 mph. If there were a reporting bias due to PITCHf/x, one would expect the increase for the teams displaying PITCHf/x to be higher than the league increase of one mph. In fact, the teams I know to be displaying PITCHf/x speeds showed no tendency to be at the higher end of this distribution. Moreover, the average increase in fastball speed for the whole group was slightly below zero!

Wait, didn’t I just say that the average fastball speed for the league had risen by about 1 mph between 2007-2009? Yes, I did. But as Dan noted in his article, “The identities of the pitchers in the sample change from year to year.” That turns out to make a great deal of difference. When you hold the identities of the pitchers constant, average fastball velocity stays pretty close to flat from year to year. This is true for both starters and relievers.

Why, then, did the league’s average fastball speed jump so much in the last couple years? The answer turns out to be in the pitchers entering and leaving our sample each year. Starting pitchers dropping out of our sample averaged 87.8 mph, while starting pitchers entering our sample averaged 89.7 mph, until 2009 when the starting pitchers entering the sample averaged 90.4 mph.
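A sketch of that decomposition, splitting two seasons’ samples into returning, departing, and entering pitchers (same assumed per-season table as in the earlier sketches):

```python
# Sketch: decompose the year-over-year change in league fastball speed
# into returning, departing, and entering pitchers. Assumes one row per
# pitcher-season with "pitcher_id", "season", "fb_velo", and "ip".
import numpy as np
import pandas as pd

def turnover_report(seasons: pd.DataFrame, y1: int, y2: int) -> None:
    a = seasons[seasons["season"] == y1].set_index("pitcher_id")
    b = seasons[seasons["season"] == y2].set_index("pitcher_id")

    def avg(df: pd.DataFrame, ids: pd.Index) -> float:
        sub = df.loc[ids]
        return np.average(sub["fb_velo"], weights=sub["ip"])

    both = a.index.intersection(b.index)
    print(f"returning: {avg(a, both):.1f} mph in {y1}, {avg(b, both):.1f} in {y2}")
    print(f"leaving after {y1}: {avg(a, a.index.difference(b.index)):.1f} mph")
    print(f"entering in {y2}:  {avg(b, b.index.difference(a.index)):.1f} mph")
```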

The 2009 season saw a bevy of new hard-throwing starters: Felipe Paulino (95 mph), Jordan Zimmermann (93), Vin Mazzaro (93), David Hernandez (93), Brett Anderson (93), Derek Holland (93), Tommy Hanson (92), Jason Berken (92), and Ricky Romero (92). On the other side of the ledger, a number of softer-tossing starters left the league after 2007 or 2008: Tom Glavine (82 mph), Greg Maddux (84), Mark Redman (84), David Wells (85), Kenny Rogers (85), Steve Trachsel (85), Mike Mussina (86), and Matt Morris (86).

We could follow this tangent and investigate aging curves for fastball speed, but Jeremy Greenhouse has already done a very good study of that at Baseball Analysts. At this point, I feel comfortable taking a pitcher’s reported fastball speed from BIS at face value. In addition, we confirmed that, on average as a group, pitchers throw their fastballs at roughly the same speed in adjacent seasons. With these two findings under our belts, we are ready to proceed with the original investigation about how the change in a pitcher’s fastball speed affects his performance.

Earlier, we estimated that a starter’s run average would increase by about 0.25 for every mph lost off his fastball, and 0.45 per mph for a reliever. But do these effects apply across the board? Perhaps some pitchers are more vulnerable to a performance drop when their velocity slips. Let’s take a look.

Are some types of pitchers affected more than others?

Left-handed pitchers have a reputation as crafty soft-tossers. (Righties do throw about two mph harder on average than lefties, and this holds true for both starters and relievers.) Perhaps this craftiness enables them to avoid the performance drops that right-handed pitchers suffer when they lose fastball speed. It turns out that speed changes result in similar performance changes for both lefties and righties. Here are the RA changes per 1-mph increase in fastball speed:

        Starters  Relievers
LHP      -0.26     -0.41
RHP      -0.26     -0.43

Of course, maybe it’s not handedness that matters. Maybe it’s what we speculated earlier, that pitchers without a blazing fastball are forced to master other parts of their craft in order to survive in the major leagues. What do the RA changes per increase in speed look like when we group our pitchers together on the basis of fastball speed?

Run average change per mph increase, group by velocity

There does appear to be some truth to our speculation, both at the league level, as we saw earlier, and at the individual pitcher level, as shown here. Pitchers who throw slower to begin with are less affected by either an increase or a decrease in their fastball speed.

Another piece of common baseball wisdom that we can check is that young pitchers enter the league as throwers, reliant on velocity, and those who survive are the pitchers who learn with age and experience to do other things besides throw hard. Let’s group the pitchers by age and see how RA varies with fastball speed changes. I divided the pitchers into nine age groups based on their age in the first season in the pair. To get the age in the second season, just add one to these age numbers in the graph.
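A sketch of the per-group slope calculation, reusing the pairs table from the earlier sketch with an assumed "age" column added for the first season of each pair:

```python
# Sketch: weighted slope of delta-RA on delta-velocity within age groups.
# Reuses the "pairs" table from the earlier sketch, with an assumed "age"
# column giving the pitcher's age in the first season of each pair.
import numpy as np
import pandas as pd

def slope_by_age(pairs: pd.DataFrame, edges: list[int]) -> pd.Series:
    groups = pd.cut(pairs["age"], bins=edges)

    def weighted_slope(g: pd.DataFrame) -> float:
        # Weighted least squares: slope = sum(w*x*y) / sum(w*x*x),
        # with x and y centered at their weighted means.
        w = g["weight"]
        x = g["delta_velo"] - np.average(g["delta_velo"], weights=w)
        y = g["delta_ra"] - np.average(g["delta_ra"], weights=w)
        return (w * x * y).sum() / (w * x * x).sum()

    return pairs.groupby(groups, observed=True).apply(weighted_slope)
```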

Run average change per mph increase, group by age

The youngest pitchers are the most sensitive to fastball speed changes. This is the Madison Bumgarner and Brett Anderson group. Then when pitchers reach their late 20s, they are much less sensitive to fastball speed changes (or at least those who stay in the major leagues are). Those who last into their 30s are again more sensitive to increases or decreases in velocity. It appears that the oldest group, those over 37, is the least sensitive to changes in fastball speed, both for starters and relievers. That is an interesting result if it’s real.

I don’t fully understand what’s going on with the pitchers in their 30s. In the oldest group, I suppose that at least some of what we see is selective sampling at play. If an old pitcher loses speed on his fastball and loses effectiveness, he probably retires in pretty quick order. What we are left with then, in the oldest sample, is the pitchers who are more immune to loss of velocity. But I can’t figure out a way to explain the rest of the age groups with selective sampling reasons.

One thing we need to be careful with when using the poor man’s regression is that our choice of where to divide the bins introduces some noise into our results. One can minimize that by choosing bin boundaries such that the sample sizes in the bins are approximately equal, but even then some noise can creep in. One way to check for it is to shift the bins around. When I did that with the aging graph, I got very similar results, so I’m pretty confident we’re not just looking at noise from the binning process. I think we may be looking at a real effect from a pitcher’s typical maturation process, at least in what the graph shows prior to age 35.
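The bin-shift check is cheap to run: nudge the edges and see whether the binned averages move. A sketch, again leaning on the pairs table from the earlier sketch:

```python
# Sketch of the bin-shift sanity check: recompute the binned averages
# with the bin edges nudged and confirm the results barely change.
import numpy as np
import pandas as pd

def binned_delta_ra(pairs: pd.DataFrame, edges: np.ndarray) -> pd.Series:
    groups = pd.cut(pairs["delta_velo"], bins=edges)
    return pairs.groupby(groups, observed=True).apply(
        lambda g: np.average(g["delta_ra"], weights=g["weight"])
    )

# Example usage, where `pairs` comes from season_pairs() above:
# base_edges = np.arange(-3.0, 3.5, 0.5)   # bin edges in mph (illustrative)
# for shift in (-0.2, 0.0, 0.2):
#     print(binned_delta_ra(pairs, base_edges + shift))  # should look alike
```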

Summary and conclusions

We could continue to break this data down 10 different ways from Sunday, and there may be some value in that, but this was enough for me for now. We’ve come to a fairly satisfying answer to the question of how much a pitcher’s performance is affected if he gains or loses a mph on his fastball.

Being able to bring the heat is a very important factor in a pitcher’s success. Being able to crank it up a notch typically improves a pitcher’s run prevention abilities, and losing a notch hurts his effectiveness. Starting pitchers improve by about one run allowed per nine innings for every gain of 4 mph, and relief pitchers improve by about one run per nine innings for every gain of 2.5 mph.

We saw that there is a significant selection bias that plagues any attempt at measuring this effect, and I don’t claim to have removed all of it here. Any study that looks at performance changes in Major League Baseball over time needs to account for players entering and leaving the sample, particularly since lower performers are more likely to leave the sample (as they are shown the door by management).

We also found that the youngest pitchers and the hardest throwers are the most affected by a shift up or down in their fastball velocity. This was true for both starters and relievers, lefties and righties alike.

References & Resources
Thanks to Fangraphs for providing the BIS fastball speed data for this analysis.

In addition to the articles already mentioned, this thread at the Book Blog about Stephen Strasburg and these two discussions at Rotojunkie contributed to the genesis of this article.


7 Comments
David Gassko
14 years ago

Good stuff, Mike!

Graham
14 years ago

Really, really fantastic piece, Mike.  As a Giants fan, this doesn’t exactly put to rest my fears about Bumgarner (although he has allegedly quickened his mechanics since heading to minor league camp and has been reported as throwing at 90-92 and touching 93 since the tweak), but it is fascinating nonetheless. 

As to the question of the trending for the late-20s and early-30s groups, I wonder if this might be explained somewhat by retaining player identity in your surveys?  Just a guess based on anecdotal evidence, but I would imagine that a lot more players wash out of the majors in their late-20s than do in their early-30s.  My reasoning is that a pitcher may be considered to have a certain amount of potential or upside well into his 20s, but that at a certain point a prospect becomes a known quantity.  I would assume that most pitchers sticking around till their 32nd or 33rd birthday have probably stuck in the bigs for actual year-to-year value; but I would guess that pitchers even as old as 28 or 29 may still benefit from an organization believing they can figure something out.  Does this make sense, or am I just fabricating a deduction to explain the data?

Mike Fast
14 years ago

Thanks, Graham.

Are you suggesting that I look at multi-year fastball velocity changes to explain what’s going on in the late-20s and early-30s age groups?

I’m open to suggestions about what is going on in that graph because for just about every explanation that I could come up with, I could make the opposite explanation sound almost as plausible if I hadn’t seen the graph.  Or as you put it, “fabricating a deduction to explain the data”.

With that caveat out of the way, I would expect that if organizations were less patient with pitchers after they hit age 30, that those who lost fastball speed and effectiveness would be quickly out of the sample, and that would bias the numbers upward for those ages, much as I posited for the retirement effect after age 35.

MGL
14 years ago

Mike,

Absolutely fantastic stuff!

This is all based on the BIS data?  When you were wondering if BIS might be biased from year to year, why didn’t you use pitch f/x to see if that was true?  After all, you are also the pitch f/x man!

Love your use of “poor man’s regression.”  Much easier to use and visually understand and interpret the graphs.

MGL

studes
14 years ago

Just adding my kudos. Fantastic work, Mike.  And very readable to boot.

Mike Fast
14 years ago

Thanks, Mickey, Dave, and David.

MGL, yes this is all based on BIS data.  I had done a similar investigation a year or so ago based on PITCHf/x data that came to similar but much more limited conclusions, but I didn’t include any of that in this study.  The BIS data covers many more years at this point than does PITCHf/x.

The simple answer to your question is that I simply didn’t think about looking at PITCHf/x to double check the 2007-2009 velocity increase.

It seems obvious now that you say it.  I suppose the reason I didn’t has something to do with the fact that I didn’t come to the conclusion about velocity increase from 2007-2009 exactly the way the logic is presented in the article.

I came across Dan’s article and the cutter explanation after I had already done a fair amount of investigation into the matter. Adding in the cutter correction made it obvious that the increase had happened mostly from 2007 to 2009; that was not so obvious to me before that.

You make some interesting comments at the Book blog, particularly about an increase in overall MLB fastball velocity implying an overall increase in pitcher effectiveness, which might be responsible for depressing run scoring.  I had viewed the question of average MLB velocity as an aside to my research and a sanity check, but that puts it into the realm of something that merits further research in its own right.

Jonathan
14 years ago

Great post! Tons of detail!