Baseball in 1968 will forever be known as the year of the pitcher. Run scoring was the lowest it had ever been in modern baseball. There were concerns the game was so far stacked in the pitcher’s advantage that it was no longer enjoyable to watch. To help the hitters, the pitching mound was lowered from 15 inches to 10, and the strike zone was returned to its 1961 size.
The run-scoring environment in 1969 was much greater than in 1968, with teams averaging 0.65 more runs per game (going from 3.42 to 4.07), an increase of more than 19 percent. This change in run scoring often is attributed to the lowering of the pitching mound, but in reality the story is much more complex.
The run-scoring environment is always changing in baseball, but there’s more to the offensive environment than how many runs are scored per game. The 1912 and 1961 campaigns look very similar in terms of runs per game, but the ways in which those runs were scored were very different. The offensive environment of 1961 was driven by home runs, while 1912 baseball barely featured any long ball. Using linear weights to break down run scoring into its components (strikeouts, walks, home runs, and balls in play) gives a better picture.
Looking at linear weights, the difference between 1912 and 1961 becomes glaringly obvious. In 1912, home runs contributed around 0.27 runs created per game, and in 1961 home runs created about 1.45 runs per game.
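The bookkeeping behind a linear-weights breakdown can be sketched in a few lines of code. The event frequencies and run values below are placeholders chosen for illustration, not the actual historical figures:

```python
# Sketch of a linear-weights breakdown of run scoring. The per-game
# event frequencies and per-event run values below are illustrative
# assumptions, not the article's actual inputs.

def runs_per_game(frequencies, run_values):
    """Sum each event's per-game frequency times its run value."""
    return sum(frequencies[event] * run_values[event] for event in frequencies)

# Hypothetical per-game event counts for a league-season
freq_1961 = {"HR": 0.95, "BB": 3.2, "SO": 5.1, "BIP_hits": 6.0}

# Hypothetical average run values per event (strikeouts cost runs)
values = {"HR": 1.40, "BB": 0.30, "SO": -0.10, "BIP_hits": 0.45}

print(round(runs_per_game(freq_1961, values), 2))
```

Summing component contributions this way is what lets us say, for example, how many of a season's runs came from home runs versus balls in play.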
Now that we have a better way of understanding run scoring, we can see what drove the drastic change in scoring between 1968 and ’69, and also how much the mound change affected things. Two things stick out. First, all components of offense improved, while strikeouts appeared to prevent more runs. The frequency of strikeouts actually was lower in 1969 than it was in 1968, but the run value of each strikeout was higher because it occurred in a higher run-scoring environment.
|Run Value of a Single Event|
As the run-scoring environment changes, so does the run value of different events. During the year of the pitcher, walks were not that costly to pitchers because the chance of the batter scoring was very low. You can view the low value of strikeouts like diminishing returns; there is only so much extra value a pitcher can accumulate. Walks in high run-scoring environments compound on each other, so each additional walk is slightly more valuable. This means that using linear weights to determine the contribution of each component is slightly flawed because their value at the margins is different from their average value.
If the strikeout rate in 1969 was the same as it was in ’68, we would expect 0.033 fewer runs scored, so we can say part of the offensive increase was driven by a decrease in strikeouts.
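That counterfactual is simple arithmetic: hold the strikeout rate at its 1968 level and multiply the rate difference by an assumed run value per strikeout. All inputs below are illustrative, not the exact numbers behind the 0.033 figure:

```python
# Counterfactual sketch: runs per game attributable to a change in an
# event's rate. Every number below is an assumption for illustration.

def run_impact(rate_new, rate_old, run_value, pa_per_game):
    """Runs per game attributable to the change in an event's rate."""
    return (rate_new - rate_old) * run_value * pa_per_game

# Assumed strikeout rates per plate appearance, with an assumed
# (negative) run value per strikeout and plate appearances per game
k_1968, k_1969 = 0.159, 0.152
run_value_k = -0.12   # illustrative
pa_per_game = 38.0    # illustrative

delta = run_impact(k_1969, k_1968, run_value_k, pa_per_game)
print(round(delta, 3))  # positive: fewer strikeouts added runs
```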
Looking at the table, we see that the offensive increase was driven mostly by walks (0.366 run increase) and home runs (0.292 run increase). Now, as we saw with strikeouts, changes in linear weights can distort outcomes even without changes in event frequency.
I came up with two theories to explain why walks increased between 1968 and 1969. Either pitchers had worse control in ’69 than they did in ’68, or the decrease in the size of the strike zone led to more pitches being called balls instead of strikes, thus leading to more walks.
There are several reasons to believe that pitchers may have had worse control. First, there was the league expansion, which lowers player quality. Second, there were talks of a strike before the 1969 season, so lots of players showed up for the season less prepared than normal. Third, pitchers had to adjust to the height of the new pitching mounds.
These three explanations are pretty weak. Why did veteran pitchers, who were unaffected by expansion, see their walk rates increase? If pitchers were out of shape, then why were walks still up the next year, when there were no concerns about a strike? And if adjusting to a different pitching mound height was so challenging, why did visiting pitchers have no such problems in previous seasons, when mound heights notoriously varied from one stadium to the next, often by more than five inches?
Although I am skeptical about the idea of a sudden decrease in pitcher control, I still wanted to make sure that was not the case. The question then becomes, how does one measure pitcher control?
Normally, the best stat (excluding Pitch-f/x) to approximate control is non-intentional walks per batter faced, but we cannot use walks because we want to see the effect control has on walks. So instead, I used wild pitches per batter faced as a proxy for pitcher control. It correlates with walks well, but not too well (R² around 0.15), because walk rate is determined by more than just control. The wild-pitch rate did actually increase by five percent in 1969, but the walk rate increased by 20 percent. The increase in walk rate is greater than what our proxy measure for control would explain.
Looking at the graph, you will notice that the y-intercept for the linear fit on the 1969 graph is about 22 percent higher than that of the 1968 fit. The trend that pitchers who throw more wild pitches also walk more batters is present in both years, but the 1969 trend line appears to be shifted up vertically. This implies that pitchers with comparable control (i.e., the same frequency of wild pitches) would be expected to walk more hitters in 1969 than in 1968. This leads me to believe that reduced pitcher control played little to no role in the increased number of walks.
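The intercept comparison can be sketched with a simple least-squares fit. The wild-pitch and walk rates below are made up for illustration; the point is only the mechanics of comparing two seasons' trend lines:

```python
# Fit walk rate against wild-pitch rate for two seasons of hypothetical
# pitcher data and compare the y-intercepts. A higher intercept with a
# similar slope means pitchers with the same control walked more batters.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical wild-pitch rates (x) and walk rates (y) per batter faced
wp = [0.002, 0.004, 0.006, 0.008, 0.010]
bb_1968 = [0.060, 0.065, 0.072, 0.075, 0.082]
bb_1969 = [0.074, 0.079, 0.086, 0.090, 0.097]

slope68, int68 = fit_line(wp, bb_1968)
slope69, int69 = fit_line(wp, bb_1969)
print(round((int69 - int68) / int68, 2))  # fractional intercept shift
```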
The idea that a pitcher with the same control would suddenly walk more batters leads to the next possible culprit, the strike zone. The strike zone change in 1969 is not as well known as the change in the pitching mound. Pitchers at the time did not seem too concerned with the change of the strike zone. Ron Kline said the change would not affect the game; he was out of baseball after 1970.
While Kline correctly realized that the strike zone in the rule book and the one the umpires call are two very different things, what he did not account for was that the zone an umpire calls is affected by the rulebook definition of the zone. So while umpires did not suddenly start calling a perfect strike zone that goes from the top of the batter’s knees to his armpits (previously, it stretched from the bottom of the knees to the top of the shoulders), they most likely started calling a smaller zone, as the rule change had intended.
Looking back at the graph of wild pitches, it is interesting to note that the R² term and the slope are larger in 1969 than they were in 1968. This implies that, although pitching control did not drive the increase in walks, it explained more of the variation in walk rates, which you might expect to see with a smaller strike zone. It’s not enough evidence to be conclusive, but it jibes with the idea that umpires were calling a smaller strike zone, which would make all pitchers walk more batters, though the pitchers with worse control would suffer more.
What we can do is look at times in baseball history when the strike zone underwent a comparable change. In 1963, the strike zone was expanded vertically; the 1969 change actually was just a return to the 1962 strike zone. Thus, changes between 1962 and 1963 caused by the strike zone should be the opposite of the changes seen in 1969.
|Changes from 1962-1963 vs. 1968-1969|
|Year|R/G|K Rate|NIBB Rate|BB Rate|IBB Rate|WP Rate|HBP Rate|HR Rate|
Looking at the effects of the increased strike zone in 1963, we see a decrease in BB% (around 15 percent) and an increase in K% (around nine percent). The two strike zone changes do not appear to be exactly opposite but they are fairly close. Increasing the size of the strike zone decreased BB% by 15 percent and increased K% by eight percent, while shrinking the strike zone increased BB% by 23 percent and decreased K% by four percent.
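As a sketch, the mirror-image comparison is just a pair of percentage changes. The league rates below are illustrative values chosen only to roughly reproduce the percentages quoted above:

```python
# Sketch comparing the two zone changes as mirror images. The rates
# per plate appearance below are assumed, illustrative inputs.

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100.0

bb_1962, bb_1963 = 0.091, 0.077   # bigger zone in 1963: walks fall
k_1968, k_1969 = 0.159, 0.152     # smaller zone in 1969: strikeouts fall

print(round(pct_change(bb_1962, bb_1963), 1))  # negative: walks dropped
print(round(pct_change(k_1968, k_1969), 1))    # negative: strikeouts dropped
```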
Another interesting point to note is that the rate of home runs changed during both strike zone changes. While historically there is a fairly strong correlation between K% and home run rate, during both of these changes home runs and strikeouts moved in opposite directions (home runs increased while strikeouts decreased, and vice versa).
Why did the ’69 strike zone changes have more of an effect on walks and less of an effect on strikeouts than the ’63 strike zone change? The answer is not obvious. If the changes were perfectly symmetrical, I’d be very skeptical, because baseball goes through random fluctuations and changes in style of play. Strikeouts already were trending upwards before the strike zone change in 1963 (although not at the same rate), so there was already some other force pushing baseball toward more strikeouts. The 1968 season was a year of incredibly low walks, so it’s likely part of the increase in walks was driven by regression to the mean (the five-year average BB% from 1964 to 1968 was around 17 percent lower than the 1969 BB%).
While today with Pitch-f/x we can actually measure changes in the strike zone and make fairly accurate estimates of the effects of these changes, I cannot make any statements with that degree of certainty. Based on the evidence, I believe it’s extremely likely that the dominant driving factor behind the increase in walks after 1968 was the change in the strike zone.
You often hear about pitchers throwing with a good downhill plane, the idea being that a ball thrown on a more downward trajectory is harder to hit and when hit will generate lots of groundballs. This is one of the reasons scouts like tall pitchers, because taller pitchers release the ball from a higher point, therefore throwing it on a more downward trajectory. Dropping the height of the mound by five inches meant that pitchers were closer to level with the batter.
If throwing downhill leads to ground balls, then a lower mound should produce fewer ground balls, and fewer ground balls mean more home runs. We did see more home runs after the mound was lowered, but it’s not clear if the home run spike was caused by an increase in fly balls. We do not have complete batted-ball data for the 1960s, but we do have records of ground outs and air outs, which have been shown to do a decent job of estimating groundball rates. Between 1968 and 1969, the GO/AO (ground out-to-air out ratio) barely changed at all; it actually increased slightly. This could lead one to believe lowering the mound had almost no effect on ground balls and may even have increased them.
It turns out that the idea of throwing downhill to induce ground balls is more myth than fact. As Doug Thorburn showed using Pitch-f/x, there is no correlation between release height and GB%. What makes the mound change in 1969 so interesting is that it artificially lowered pitchers’ release points while not drastically changing their stuff or their mechanics. You can compare individual players to themselves, and there is no appreciable change in GO/AO. (As a side note, if a five-inch change in mound height has no significant effect on groundball rates, then it stands to reason that any slight mechanical adjustments a pitcher makes to increase the height of his release point will have no effect on his groundball rate.)
Now, GO/AO is far from a perfect measure of groundball rates, so it is possible that a higher percentage of ground balls went for hits in 1968, or that more fly balls went for hits in 1969, which could mask an actual decrease in groundball rates. (An important note is that I am counting double plays as only one out, so the increase in double play opportunities in 1969 would not artificially inflate the number of ground outs).
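A minimal sketch of that GO/AO bookkeeping, with each double play counted as a single ground out (the event totals below are invented):

```python
# GO/AO with ground-into-double-plays counted once, so extra DP
# opportunities don't inflate the ratio. Totals are made up.

def go_ao(ground_outs, air_outs, double_plays):
    """Ground-out-to-air-out ratio, with each GIDP counted once."""
    # Assume ground_outs records two outs per double play;
    # subtract one out per DP so each counts as one ground ball.
    return (ground_outs - double_plays) / air_outs

print(round(go_ao(ground_outs=16500, air_outs=15000, double_plays=1400), 2))
```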
I’m not saying the mound change had no effect on groundball rates; it just had no effect large enough for us to see in the data that we have. While on the topic of GO/AO, it is interesting to note that the league average was trending upwards quickly from the late ’50s on through the ’60s, and that ’69 is actually the year with the highest GO/AO ever.
Home runs clearly increased in 1969, as did BABIP. Batters obviously were hitting balls harder than they had in the past, leading to more home runs and more hits on balls in play. Why were batters suddenly squaring up pitches more easily than they had done in the past? The lowered pitching mound could be one of the contributing factors.
While throwing on less of a downhill plane may not have drastically changed groundball rates, it very well could have affected the quality of contact. Also, lowering the pitching mound reduces the amount of momentum the pitcher generates coming off the mound, or “oomph,” as Camilo Pascual so eloquently put it. Denny McLain thought the lower mound would strain pitchers more, making it more difficult for them to pitch 250-300 innings. Some pitchers complained of their arms getting sore in spring training, thinking the lower mound might be the cause. Brian Bannister noted that, “From a pitcher’s perspective, one of the interesting things about pitching off of a lower mound is that you tend to get more sore afterwards.” So whether from lack of a downhill plane, lack of “oomph” or general fatigue, it seems very likely that the lower mound contributed to batters seeing more hittable pitches in 1969.
Now let’s not give the lowered mound all of the credit for the increase in home runs and BABIP. The smaller strike zone likely contributed to at least some of these increases. The effects of the strike zone extend beyond walks and strikeouts.
Imagine the strike zone is enormous and almost any pitch gets called a strike. The batter will have to swing at pitches nowhere near home plate, and if he makes contact, it will most likely be feeble. Now, when a strike zone shrinks, the opposite should happen, as the batter no longer needs to swing at pitches he can only hit weakly. Additionally, a smaller strike zone means pitchers who fall behind in the count are forced to throw more hittable pitches.
In 2013, batters had a .303 BABIP and .197 ISO when ahead in the count and a .288 BABIP and .092 ISO when behind. Once again, we cannot measure how many more favorable counts arose because of the new smaller strike zone, so we will have to resort to looking at the changes seen when the strike zone first was increased in size. Let’s return to the strike zone change in 1963 to see if there were any significant effects on quality of contact that we could attribute to strike zone changes.
We do see a sizeable drop in home runs in 1963, but it’s not even close to the change we see from 1968 to ’69. So while it seems very plausible a smaller strike zone should increase home runs, the increase seen in 1969 looks to be caused by more than just a zone change. We saw around a nine percent decrease in home run frequency in ’63 and a 27 percent increase in home run frequency in ’69, so I’d be comfortable saying around one third of the increase in ’69 was driven by the strike zone change.
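The "one third" figure is back-of-the-envelope arithmetic: treat the roughly nine percent drop from the 1963 zone expansion as the approximate size of the zone effect, and compare it to the observed 27 percent jump:

```python
# Rough mirror-image attribution, not a measurement: assume a smaller
# zone adds back about as much home-run frequency as the bigger 1963
# zone removed, then compare that to the observed 1969 increase.

zone_effect = 0.09        # HR frequency drop when the zone grew in 1963
observed_increase = 0.27  # HR frequency jump from 1968 to 1969

share_from_zone = zone_effect / observed_increase
print(round(share_from_zone, 2))  # roughly one third
```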
It’s very surprising to see how much BABIP dropped when the strike zone was increased and how much it jumped back up when the strike zone shrank (both changes were greater than five points). But the other obvious trend is that a five-point change in BABIP between seasons isn’t too uncommon, as there is a fair amount of randomness. After the drop in BABIP in the ’63 season, it shoots back up in ’64, which makes me believe that a lot of the change between ’62 and ’63 is due to random fluctuation. Therefore, while the reduced strike zone likely had some positive effect on BABIP, it’s not clear how much.
A few extraneous factors could have contributed to increased home runs and BABIP. Pitcher Jim Hannan noted that the baseballs used in the ’68 season were particularly soft, making it more challenging for batters to hit home runs. But this is just anecdotal evidence. I looked at the home run graph again and saw 1968 was a bit of an outlier in terms of home runs allowed, low even compared to 1967. (The same comment was made during spring training in 1969, so it’s not like Hannan was making excuses for the more than four-fold increase in home runs he gave up in the 1969 season.)
Another possible effect was the change in ballpark dimensions. In an effort to increase hitting, several teams moved outfield fences closer to home plate, and the Dodgers notably moved their home plate closer to the fence. Another stadium change was the switch to synthetic turf in some stadiums. The new turf was supposed to let ground balls move faster, making them more likely to go for a hit. The BABIP for turf is much higher than for grass, but while the number of games played on turf increased in 1969, the actual percentage of games played on turf went down slightly. Also, the BABIP on grass increased in 1969.
One other interesting note: if changes in the height of the pitching mound had an effect, it would be greatest on the Dodgers, who were famous for having the tallest pitching mound in 1968. The Dodgers had the lowest-scoring stadium in ’68, but no longer in ’69. So while part of this could be from the Dodgers having to lower their pitching mound more than any other team, they did see most of the new scoring coming from home runs, which was due partly to their closer fences. Also, the Dodgers saw their GO/AO go up in ’69, so they definitely weren’t relying on a high pitching mound to get ground outs.
What role could the mound change have? While the change in the pitching mound coincided with a large change in the offensive environment, it is clear that it did not cause all of the change. There were so many modifications made to baseball in 1969 that we have too many confounding variables to definitively say what the actual effects of the mound change were. The change did have an effect, but it is much smaller than we might have guessed.
Much of the change in run scoring can be attributed to other changes in the game. The increase in walks was not caused by a change to the pitching mound, but was caused almost entirely by the strike zone change. The decrease in strikeouts is right in line with what we would expect from the change in the strike zone.
The only remaining area where the mound change could have had a significant impact is on home runs, but the home run increase is not entirely a result of the mound change, as at least some of it was caused by the diminished strike zone, changing park dimensions and other factors. My best estimate is that the mound change accounted for at most 25 percent of the increase in run scoring and half of the increase in home runs. This is in no way an exact number but just a best guess given the data available. If MLB ever decides to change the height of the pitching mound, I would appreciate it if they could do it one league at a time so we can see how much it matters.
References and Resources
1. “Kline Sure New Zone Won’t Change Game,” St. Petersburg Times
2. “The Strike Zone During the PITCHf/x Era,” The Hardball Times
3. “Converting GO/AO to GB%,” FanGraphs
4. “Downhill From Here,” Baseball Prospectus
5. “The OTHER Way We Could Move the Mound,” Baseball Prospectus
6. Sports Illustrated Vault