The Height of the Hill

The pitching mound isn’t as tall as it used to be (via slgckgc).

In 1969, the height of the mound changed. A higher mound is supposed to help the pitcher because it lets him throw downhill, but the advantage of throwing from a higher mound is not well understood. If only Major League Baseball had gone about changing the pitching mound height in a more scientific manner instead of lumping it together with a smaller strike zone. Analyzing the effect of a lower pitching mound is difficult, both because of the smaller strike zone and because we don’t have data from the 1960s like we have today.

The 1968 season will forever be known as the Year of the Pitcher. Run scoring was the lowest it had been in modern baseball history, and there were concerns the game had become so heavily stacked in the pitcher’s favor that it was no longer enjoyable to watch. To help the hitters, the pitching mound was lowered from 15 inches to 10, and the strike zone was returned to its 1961 size.

The run-scoring environment in 1969 was much higher than it was in 1968, with teams averaging 0.65 more runs per game (going from 3.42 to 4.07), an increase of more than 19 percent. This change in run scoring often is attributed to the lowering of the pitching mound, but in reality the story is more complex.

[Figure: League runs per game by season.]

The run-scoring environment is always changing in baseball, but there’s more to the offensive environment than how many runs are scored per game. The 1912 and 1961 campaigns look very similar in terms of runs per game, but the ways in which those runs were scored were very different. The offensive environment of 1961 was driven by home runs, while 1912 baseball barely featured any long ball. Using linear weights to break down run scoring into its components (strikeouts, walks, home runs, and balls in play) gives a better picture.
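To make the bookkeeping concrete, here is a minimal sketch (in Python) of that decomposition, using the 1969 league rates and run values from the table that appears later in this article; balls in play are left out only because the table doesn’t list a rate or value for them.

```python
# Runs per game contributed by each event, via linear weights:
#   runs/game from event = (events per PA) * (run value of event) * (PA per game)
# Rates and run values are the 1969 figures from the table below.

rates_1969 = {"HR": 0.021, "SO": 0.152, "BB": 0.091, "HBP": 0.006}
run_values_1969 = {"HR": 1.400, "SO": -0.233, "BB": 0.292, "HBP": 0.317}

PA_PER_GAME = 4.07 / 0.107  # runs per game divided by runs per PA, about 38

for event, rate in rates_1969.items():
    contribution = rate * run_values_1969[event] * PA_PER_GAME
    print(f"{event}: {contribution:+.2f} runs/game")
```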

[Figure: Runs per game contributed by different plate appearance outcomes (strikeouts, walks, home runs, and balls in play). The strikeout contribution is negative, so that line sits below the x-axis.]

Looking at linear weights, the difference between 1912 and 1961 becomes glaringly obvious. In 1912, home runs contributed around 0.27 runs created per game, and in 1961 home runs created about 1.45 runs per game.

Now that we have a better way of understanding run scoring, we can see what drove the drastic change in scoring between 1968 and ’69, and how much the mound change affected things. Two things stick out. First, every component of offense improved, while strikeouts appear to have prevented more runs. Second, the frequency of strikeouts actually was lower in 1969 than it was in 1968; the value of a strikeout was higher only because it came in a higher run-scoring environment.

Run Value of a Single Event

           Event frequency                          Run value per event
Year       HR/PA   SO/PA   BB/PA   HBP/PA   R/PA    HR      SO      BB      HBP
1968       0.017   0.158   0.076   0.006    0.092   1.400   -0.200  0.266   0.292
1969       0.021   0.152   0.091   0.006    0.107   1.400   -0.233  0.292   0.317
’69/’68    1.273   0.957   1.195   0.938    1.163   1.000   1.165   1.095   1.087

The left columns are event frequencies; the right columns are the linear-weight run values of those events. The ’69/’68 row is the ratio of the 1969 figure to the 1968 figure.

As the run-scoring environment changes, so does the run value of each event. During the Year of the Pitcher, walks were not that costly to pitchers because the chance of the batter eventually scoring was very low. You can view the low value of strikeouts as diminishing returns: there is only so much extra value a pitcher can accumulate by preventing runs that were unlikely to score anyway. Walks in high run-scoring environments compound on one another, so each additional walk is slightly more valuable. This means that using linear weights to determine the contribution of each component is slightly flawed, because the value of an event at the margin is different from its average value.

If the strikeout rate in 1969 was the same as it was in ’68, we would expect 0.033 fewer runs scored, so we can say part of the offensive increase was driven by a decrease in strikeouts.
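For anyone who wants to check the method, here is a back-of-the-envelope version of that counterfactual. It is a sketch, not a reproduction of the 0.033 figure: the exact answer depends on which season’s run value and which plate-appearances-per-game figure you hold fixed, which is the marginal-versus-average caveat from above.

```python
# Counterfactual: hold the 1969 strikeout rate at its 1968 level and ask how
# many runs per game that would remove, everything else fixed at 1969 levels.
# The article's 0.033 figure uses its own conventions; swapping in the 1968
# run value or a different PA/G changes the result somewhat.

K_RATE_1968 = 0.158    # SO per plate appearance, 1968
K_RATE_1969 = 0.152    # SO per plate appearance, 1969
SO_RUN_VALUE = -0.233  # 1969 linear-weight run value of a strikeout
PA_PER_GAME = 38.0     # approximate plate appearances per team game

extra_strikeouts = (K_RATE_1968 - K_RATE_1969) * PA_PER_GAME
print(f"Runs/game removed: {extra_strikeouts * SO_RUN_VALUE:+.3f}")
```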

Looking at the table, we see that the offensive increase was driven mostly by walks (0.366 run increase) and home runs (0.292 run increase). But as we saw with strikeouts, changes in linear weights can distort outcomes even without changes in event frequency.

Walks

I came up with two theories to explain why walks increased between 1968 and 1969. Either pitchers had worse control in ’69 than they did in ’68, or the decrease in the size of the strike zone led to more pitches being called balls instead of strikes, thus leading to more walks.

There are several reasons to believe pitchers may have had worse control. First, the league expanded, which diluted the talent pool. Second, there was talk of a strike before the 1969 season, so many players showed up less prepared than normal. Third, pitchers had to adjust to the height of the new pitching mounds.

These three explanations are pretty weak. If expansion diluted pitching talent, why did veteran pitchers see their walk rates increase, too? If pitchers were out of shape, then why were walks still up the next year, when there were no concerns about a strike? And if adjusting to a different pitching mound height was so challenging, why did visiting pitchers have no such problems in previous seasons, when mound heights notoriously varied from one stadium to the next, often by more than five inches?

Although I am skeptical about the idea of a sudden decrease in pitcher control, I still wanted to make sure that was not the case. The question then becomes, how does one measure pitcher control?

Normally, the best stat (excluding Pitch-f/x) to approximate control is non-intentional walks per batter faced, but we cannot use walks because we want to see the effect control has on walks. So instead, I used wild pitches per batter faced as a proxy for pitcher control. It correlates with walks well, but not too well (R² around 0.15), because walk rate is determined by more than just control. The wild-pitch rate actually did increase by five percent in 1969, but the walk rate increased by 20 percent. The increase in walk rate is greater than what our proxy measure of control would explain.

[Figure: Walk rate vs. wild-pitch rate for individual pitchers, with linear fits for 1968 and 1969.]

Looking at the graph, you will notice that the y-intercept of the linear fit for 1969 is about 22 percent higher than that of the 1968 fit. The trend that wilder pitchers walk more batters is present in both years, but the 1969 trend line appears to be shifted up vertically. This implies that pitchers with comparable control (i.e., the same frequency of wild pitches) would be expected to walk more hitters in 1969 than in 1968. This leads me to believe that reduced pitching control played little to no role in the increased number of walks.
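Here is a rough sketch of the comparison behind that graph: fit walk rate against wild-pitch rate separately for each season, then compare the intercepts, slopes, and R² values. The per-pitcher arrays are randomly generated stand-ins chosen to mimic the shift described above, not the real 1968-69 data.

```python
import numpy as np

# Fit walk rate as a linear function of wild-pitch rate, one fit per season,
# then compare intercepts, slopes, and R^2. Arrays below are hypothetical.

def fit_bb_on_wp(wp_rate, bb_rate):
    slope, intercept = np.polyfit(wp_rate, bb_rate, 1)
    residuals = bb_rate - (slope * wp_rate + intercept)
    r_squared = 1 - residuals.var() / bb_rate.var()
    return slope, intercept, r_squared

rng = np.random.default_rng(0)
wp_68 = rng.uniform(0.002, 0.015, 200)                   # WP per batter faced
bb_68 = 0.050 + 1.5 * wp_68 + rng.normal(0, 0.015, 200)  # hypothetical 1968
wp_69 = rng.uniform(0.002, 0.015, 200)
bb_69 = 0.061 + 1.8 * wp_69 + rng.normal(0, 0.013, 200)  # hypothetical 1969

for year, (wp, bb) in (("1968", (wp_68, bb_68)), ("1969", (wp_69, bb_69))):
    slope, intercept, r2 = fit_bb_on_wp(wp, bb)
    print(f"{year}: slope={slope:.2f}, intercept={intercept:.3f}, R^2={r2:.2f}")
```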

The idea that a pitcher with the same control would suddenly walk more batters leads to the next possible culprit: the strike zone. The strike zone change in 1969 is not as well known as the change to the pitching mound, and pitchers at the time did not seem too concerned about it. Ron Kline said the change would not affect the game; he was out of baseball after 1970.

While Kline correctly realized that the strike zone in the rule book and the one the umpires call are two very different things, what he did not account for was that the zone an umpire calls is affected by the rulebook definition. So while umpires did not suddenly start calling a perfect strike zone from the top of the batter’s knees to his armpits (previously, it stretched from the bottom of the knees to the top of the shoulders), they most likely started calling a smaller zone, as the rule change had intended.

Looking back at the wild-pitch graph, it is interesting to note that the R² value and the slope are both larger in 1969 than they were in 1968. This implies that, although pitching control did not drive the increase in walks, it explained more of the variation in walk rates, which is what you might expect to see with a smaller strike zone. It’s not enough evidence to be conclusive, but it jibes with the idea that umpires were calling a smaller zone, which would make all pitchers walk more batters, though the pitchers with worse control would suffer more.

What we can do is look at times in baseball history when the strike zone underwent a comparable change. In 1963, the strike zone was expanded vertically; the 1969 change actually was just a return to the 1962 strike zone. Thus, changes between 1962 and 1963 caused by the strike zone should be the opposite of the changes seen in 1969.

Changes from 1962-1963 vs. 1968-1969

Year     R/G    K Rate   NIBB Rate   BB Rate   IBB Rate   WP Rate   HBP Rate   HR Rate
1962     4.46   0.141    0.081       0.088     0.007      0.008     0.006      0.024
1963     3.95   0.153    0.071       0.078     0.008      0.008     0.006      0.022
62/63    1.13   0.920    1.147       1.120     0.868      1.013     0.983      1.090
1968     3.42   0.158    0.066       0.076     0.010      0.008     0.006      0.017
1969     4.07   0.152    0.081       0.091     0.010      0.009     0.006      0.021
69/68    1.19   0.957    1.231       1.195     0.960      1.048     0.938      1.273

Each ratio row divides the smaller-zone season by the larger-zone season (1962 over 1963, and 1969 over 1968).

Looking at the effects of the enlarged strike zone in 1963, we see a decrease in walk rate and an increase in strikeout rate. The two strike zone changes do not appear to be exactly opposite, but they are fairly close: enlarging the strike zone decreased the non-intentional walk rate by around 13 percent and increased K% by around nine percent, while shrinking it increased the non-intentional walk rate by 23 percent and decreased K% by four percent.
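For anyone reproducing the ratio rows from the table, a quick sketch of the arithmetic (small third-decimal differences versus the table come from rounding in the published rates):

```python
# Each ratio divides the smaller-zone season by the larger-zone season,
# using the K and non-intentional walk rates from the table above.

rates = {
    1962: {"K": 0.141, "NIBB": 0.081},
    1963: {"K": 0.153, "NIBB": 0.071},
    1968: {"K": 0.158, "NIBB": 0.066},
    1969: {"K": 0.152, "NIBB": 0.081},
}

for small, large in ((1962, 1963), (1969, 1968)):
    for stat in ("K", "NIBB"):
        ratio = rates[small][stat] / rates[large][stat]
        print(f"{small}/{large} {stat}: {ratio:.3f} ({(ratio - 1) * 100:+.0f}%)")
```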

Another interesting point: the rate of home runs changed during both strike zone changes. While historically there is a fairly strong correlation between K% and home run rate, during both of these changes home runs and strikeouts moved in opposite directions (home runs increased while strikeouts decreased, and vice versa).

Why did the ’69 strike zone changes have more of an effect on walks and less of an effect on strikeouts than the ’63 strike zone change? The answer is not obvious. If the changes were perfectly symmetrical, I’d be very skeptical, because baseball goes through random fluctuations and changes in style of play. Strikeouts already were trending upwards before the strike zone change in 1963 (although not at the same rate), so there was already some other force pushing baseball toward more strikeouts. The 1968 season was a year of incredibly low walks, so it’s likely part of the increase in walks was driven by regression to the mean (the five-year average BB% from 1964 to 1968 was around 17 percent lower than the 1969 BB%).

While today with Pitch-f/x we can actually measure changes in the strike zone and make fairly accurate estimates of the effects of these changes, I cannot make any statements with that degree of certainty. Based on the evidence, I believe it’s extremely likely that the dominant driving factor behind the increase in walks after 1968 was the change in the strike zone.

Home Runs

You often hear about pitchers throwing with a good downhill plane, the idea being that a ball thrown on a more downward trajectory is harder to hit and, when hit, will generate lots of ground balls. This is one of the reasons scouts like tall pitchers: taller pitchers release the ball from a higher point and therefore throw it on a more downward trajectory. Dropping the height of the mound by five inches meant that pitchers were closer to level with the batter.

If throwing downhill leads to ground balls, then after the mound was lowered we would expect to see fewer ground balls, and fewer ground balls mean more home runs. We did see more home runs after the mound was lowered, but it’s not clear the home run spike was caused by an increase in fly balls. We do not have complete batted-ball data for the 1960s, but we do have records of ground outs and air outs, which have been shown to do a decent job of estimating groundball rates. Between 1968 and 1969, the GO/AO (ground out-to-air out ratio) barely changed at all; it actually increased slightly. This could lead one to believe lowering the mound had almost no effect on ground balls, and may even have increased them.

It turns out that the idea of throwing downhill to induce ground balls is more myth than fact. As Doug Thorburn showed using Pitch-f/x, there is no correlation between release height and GB%. What makes the mound change in 1969 so interesting is that it artificially lowered pitchers’ release points without drastically changing their stuff or their mechanics. You can compare individual players to themselves, and there is no appreciable change in GO/AO. (As a side note, if a five-inch change in mound height has no significant effect on groundball rates, then it stands to reason that any slight mechanical adjustment a pitcher makes to increase the height of his release point will have no effect on his groundball rate.)

Now, GO/AO is far from a perfect measure of groundball rates, so it is possible that a higher percentage of ground balls went for hits in 1968, or that more fly balls went for hits in 1969, which could mask an actual decrease in groundball rates. (An important note is that I am counting double plays as only one out, so the increase in double play opportunities in 1969 would not artificially inflate the number of ground outs).
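For clarity, here is the computation I am describing, with illustrative field names; the key point is that one out per double play is subtracted from the ground-out total:

```python
# GO/AO as used in this article, with each ground-ball double play counted as
# one out rather than two. Field names are illustrative, not from any real
# data source; ground_outs is assumed to count every out recorded on the
# ground, so each double play contributes two outs before the adjustment.

def go_ao(ground_outs: int, air_outs: int, double_plays: int) -> float:
    return (ground_outs - double_plays) / air_outs

# Example with made-up league totals:
print(f"GO/AO: {go_ao(ground_outs=31_500, air_outs=29_000, double_plays=1_600):.3f}")
```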

I’m not saying the mound change had no effect on groundball rates; it just had no effect large enough to see in the data we have. While on the topic of GO/AO, it is interesting to note that the league average was trending upward quickly from the late ’50s through the ’60s, and that ’69 is actually the year with the highest GO/AO on record.

[Figure: League GO/AO by season.]

Home runs clearly increased in 1969, as did BABIP. Batters obviously were hitting balls harder than they had in the past, leading to more home runs and more hits on balls in play. Why were batters suddenly squaring up pitches more easily than they had done in the past? The lowered pitching mound could be one of the contributing factors.

While throwing on less of a downhill plane may not have drastically changed groundball rates, it very well could have affected the quality of contact. Also, lowering the pitching mound reduces the amount of momentum the pitcher generates coming off the mound, or “oomph,” as Camilo Pascual so eloquently put it. Denny McLain thought the lower mound would strain pitchers more, making it more difficult for them to pitch 250-300 innings. Some pitchers complained of their arms getting sore in spring training, thinking the lower mound might be the cause. Brian Bannister noted that, “From a pitcher’s perspective, one of the interesting things about pitching off of a lower mound is that you tend to get more sore afterwards.” So whether from lack of a downhill plane, lack of “oomph” or general fatigue, it seems very likely that the lower mound contributed to batters seeing more hittable pitches in 1969.

Now let’s not give the lowered mound all of the credit for the increase in home runs and BABIP. The smaller strike zone likely contributed to at least some of these increases. The effects of the strike zone extend beyond walks and strikeouts.

Imagine the strike zone is enormous and almost any pitch gets called a strike. The batter has to swing at pitches nowhere near home plate, and if he makes contact, it will most likely be feeble. When the strike zone shrinks, the opposite should happen, as the batter no longer needs to swing at pitches he can only hit weakly. Additionally, a smaller strike zone means pitchers who fall behind in the count are forced to throw more hittable pitches.

In 2013, batters had a .303 BABIP and .197 ISO when ahead in the count and a .288 BABIP and .092 ISO when behind. Once again, we cannot measure how many more favorable counts arose because of the new smaller strike zone, so we will have to resort to looking at the changes seen when the strike zone first was increased in size. Let’s return to the strike zone change in 1963 to see if there were any significant effects on quality of contact that we could attribute to strike zone changes.
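For reference, a minimal sketch of the standard formulas behind those two numbers:

```python
# The standard definitions behind the split above:
#   BABIP = (H - HR) / (AB - SO - HR + SF)  -- batting average on balls in play
#   ISO   = (TB - H) / AB                   -- isolated power (SLG minus AVG)

def babip(h: int, hr: int, ab: int, so: int, sf: int) -> float:
    return (h - hr) / (ab - so - hr + sf)

def iso(total_bases: int, h: int, ab: int) -> float:
    return (total_bases - h) / ab

# Made-up example line: 150 H, 20 HR, 550 AB, 100 SO, 5 SF, 250 TB
print(f"BABIP: {babip(150, 20, 550, 100, 5):.3f}, ISO: {iso(250, 150, 550):.3f}")
```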

[Figure: League home run rate by season.]

We do see a sizeable drop in home runs in 1963, but it’s not even close to the change we see from 1968 to ’69. So while it seems very plausible that a smaller strike zone should increase home runs, the increase seen in 1969 looks to be caused by more than just a zone change. We saw roughly an eight percent decrease in home run frequency in ’63 and a 27 percent increase in home run frequency in ’69, so I’d be comfortable saying around one third of the increase in ’69 was driven by the strike zone change.

[Figure: League BABIP by season.]

It’s very surprising to see how much BABIP dropped when the strike zone was enlarged and how much it jumped back up when the strike zone shrank (both changes were greater than five points). But the other obvious trend is that a five-point change in BABIP between seasons isn’t too uncommon, as there is a fair amount of randomness. After the drop in BABIP in the ’63 season, it shoots back up in ’64, which makes me believe a lot of the change between ’62 and ’63 was random fluctuation. Therefore, while the smaller strike zone likely had some positive effect on BABIP, it’s not clear how much.

A few extraneous factors could have contributed to the increases in home runs and BABIP. Pitcher Jim Hannan noted that the baseballs used in the ’68 season were particularly soft, making it more challenging for batters to hit home runs. This is just anecdotal evidence, but looking at the home run graph again, 1968 was a bit of an outlier in terms of home runs allowed, low even compared to 1967. (The same comment was made during spring training in 1969, so it’s not as if Hannan was making excuses for the more than four-fold increase in home runs he gave up in the 1969 season.)

Another possible factor was the change in ballpark dimensions. In an effort to increase hitting, several teams moved their outfield fences closer to home plate, and the Dodgers notably moved home plate closer to the fences. Another stadium change was the switch to synthetic turf in some parks. The new turf was supposed to let ground balls travel faster, making them more likely to go for hits. The BABIP on turf is much higher than on grass, but while the number of games played on turf increased in 1969, the percentage of games played on turf actually went down slightly. Also, the BABIP on grass increased in 1969.

One other interesting note: if the change in mound height had an effect, it should have been greatest on the Dodgers, who were famous for having the tallest pitching mound in 1968. The Dodgers had the lowest-scoring stadium in ’68, but no longer in ’69. Part of that could be from the Dodgers having to lower their mound more than any other team, but most of their new scoring came from home runs, which was due partly to their closer fences. Also, the Dodgers saw their GO/AO go up in ’69, so they definitely weren’t relying on a high pitching mound to get ground outs.

In Conclusion

So what role did the mound change play? While the change in the pitching mound coincided with a large change in the offensive environment, it is clear that it did not cause all of the change. There were so many modifications made to baseball in 1969 that we have too many confounding variables to say definitively what the actual effects of the mound change were. The change likely did have an effect, but a much smaller one than we might have guessed.

Much of the change in run scoring can be attributed to other changes in the game. The increase in walks was not caused by a change to the pitching mound, but was caused almost entirely by the strike zone change. The decrease in strikeouts is right in line with what we would expect from the change in the strike zone.

The only remaining area where the mound change could have had a significant impact is home runs, but the home run increase is not entirely a result of the mound change, as at least some of it was caused by the smaller strike zone, changing park dimensions and other factors. My best estimate is that the mound change accounted for at most 25 percent of the increase in run scoring and half of the increase in home runs. This is in no way an exact number, just a best guess given the data available. If MLB ever decides to change the height of the pitching mound again, I would appreciate it if they could do it one league at a time so we can see how much it matters.

References and Resources

1. “Kline Sure New Zone Won’t Change Game,” St. Petersburg Times
2. “The Strike Zone During the PITCHf/x Era,” The Hardball Times
3. “Converting GO/AO to GB%,” FanGraphs
4. “Downhill From Here,” Baseball Prospectus
5. “The OTHER Way We Could Move the Mound,” Baseball Prospectus
6. Sports Illustrated Vault


George is always thinking about baseball, and frequently writes down those thoughts on his blog. Follow him on Twitter @GWRambling.
Comments
aweb
9 years ago

Could the standardization of mound height also have helped batters (more than pitchers), as much as the lowering of it? With pitcher release points naturally varying by a foot, another half a foot of variation due to mound height may have made it even harder to adjust from game to game. I’d expect to see this effect on hitters when they switch venues with notably different mound heights (aside from Dodger Stadium being noted as higher than others, was anywhere notably lower than most?). A similar deleterious effect might be noticeable for relievers who pitch consecutive games in different venues (as opposed to pitchers in the same venues as a control group).

“if a 5 inch change in mound height has no significant effect on ground ball rates then it goes to reason that any slight mechanical adjustments a pitcher makes to increase the height of their release point will have no effect on their groundball rate” – I don’t think this necessarily “stands to reason”. A pitcher changing their release point should get different movement on the ball from the change in arm and hand positions, so it is possible that for some pitchers, raising their release points will result in more grounders due to different movement on the pitch. For those pitchers that don’t get this effect, or might get the opposite effect, they wouldn’t make the same adjustment. Raising the release point of every pitcher to get more grounders is likely futile, but it should work for a sub-group of them.

George Resor
9 years ago
Reply to  aweb

Thanks for your comments; you bring up some smart questions. I hadn’t looked into the effects of standardization on batters and relievers when writing the article. I’ve looked into your points and here’s what I found. If the effect of a standardized mound helping the batter were very large, then you would expect to see a larger home/road batting split before mound heights were standardized and a smaller split afterwards. (The logic being that a batter’s home mound height is the one he is used to seeing the most, so he would do better against it.) Before the mound was standardized, batters hit 3% better at home and 4% worse on the road. After the mound was standardized, batters hit 5% better at home and 5% worse on the road. Batters did hit better at home than on the road, but they continued to do so after the mounds were standardized to almost exactly the same degree. So, if standardization helped the batters, the effect wasn’t large enough to alter the home/road splits. As for the possible effect of a non-standardized mound on relief pitchers, relievers pitched 2% worse than average before standardization, and 3% worse than average after. We don’t see a change in the splits for batters vs. starters versus batters vs. relievers when mound heights were standardized (although it is interesting to note that batters in the ’60s hit better against relievers than they did against starters, while today the opposite is true).

As for your question about teams with a low pitching mound, I know some people said the White Sox had to raise their mound to reach the standard height in 1969.
And on your second comment, about some pitchers possibly getting more ground balls from a higher release point: yes, that might be true. Conventional thought is that throwing on a downhill plane generates more ground balls; the point I was trying to make is that, in general, this does not appear to be true.

Steve
9 years ago

You mention as a possible cause that MLB expanded in 1969, but then never really expand upon that. Briefly looking at your final two graphs for HR rate and BABIP, there are definite spikes in both from 68-69, but there are also definite spikes in both from 77-78 and 92-93. There doesn’t seem to be any effect from the 98 expansion, but that was during the height of the steroid era, and folks were jacking HRs left and right anyway. I would suggest that the biggest cause of the increased scoring in 1969 is that there were two more terrible teams in each of the AL and NL, adding essentially 8-10 replacement-level starting pitchers to each league.

Big Daddy V
9 years ago
Reply to  Steve

I’ve heard this argument about expansion millions of times and I never quite got it. Yes, there are more bad pitchers in the league to give up runs, but there are also more bad hitters in the league to strike out. It would probably balance out for the most part.

Steve
9 years ago
Reply to  Big Daddy V

Would it though? By adding replacement level hitters you would expect pitchers (especially above average ones) to get more Ks. Against replacement level pitchers one would expect above average hitters to hit more homers and have a higher ISO. Replacement level hitters have to play in the field as well, hurting overall team defense. I would imagine good hitters hitting harder off of replacement level pitchers to more replacement level fielders would skew scoring higher than the good pitchers striking out an extra replacement level hitter per game.

JeremyR
9 years ago

Scientific manner, not manor, though I like the imagery of it.

bucdaddy
9 years ago

On the front end of the strike zone change, I like looking at how it affected Koufax.

Sandy pitched in a progression of home parks from Ebbets Field to the L.A. Coliseum to Dodger Stadium, from a park that was 395 feet at its deepest and had a 297 RF line, to a park that had a 251-foot LF line and 320-foot power alleys to one that had 380-foot power alleys. He goes from being mediocre to fair in 1961, and then very good when he moves into Dodger Stadium, in 1962, and becomes a monster in 1963, throwing to a new strike zone from (IIRC) the top of the shoulders to the bottom of the knee:

http://www.baseball-reference.com/players/k/koufasa01.shtml

The Highlights: He pitches 184 innings one year, 311 the next, with virtually the same number of walks (57/58). (Although I note he’s missing about 12 starts in 1962, maybe due to injury or something: He pitched 255 innings with 96 walks in 1961.)

Can you imagine pitchers getting the call at the top of the shoulder now?

Anyway, that’s all anecdotal, of course, and SSS. But I like what it implies.

Eric Ripps
9 years ago

Hey George,

Fascinating article from top to bottom! Is there any way that you might be willing / able to reproduce the charts with the data you used to generate the linear weights component runs per game graph?

Stan
9 years ago

Hi George,
Very nice. Though as a Cardinal fan I always thought of 1968 as the year of Bob Gibson, some feel there may have been some luck involved, as well as official scorers being kinder to him: http://www.beyondtheboxscore.com/2006/7/7/14244/53010

Marc Schneider
9 years ago
Reply to  Stan

The old joke is that “Bob Gibson is the luckiest pitcher in baseball; every time he pitches, the other team doesn’t score any runs.”

Eric
9 years ago

I could be wrong, it’s happened before, but I thought the height of the mound changed from 12 inches to 10 inches between 1968 and 1969, NOT 15 inches to 10. I don’t think it was that drastic.

Marc Schneider
9 years ago

Koufax had a circulatory problem in a finger in 1962, causing him to miss the latter part of the season.

As the author suggests, while the strike zone was clearly larger in 1963, I doubt the umpires were actually calling many strikes on shoulder-high pitches.

My understanding is that the strike zone was increased in the wake of Roger Maris’ 61-home-run season, out of fear that too many home runs were “cheapening” the game. Although 1968 is called the Year of the Pitcher, hitting was gradually decreasing throughout the period after the strike zone change, culminating in 1968. Which, in my view, makes Willie Mays’ 1965 season, in which he hit 52 home runs, truly amazing.

JKB
9 years ago

Great article – it captures really well the impact of the shift from 1962-1963, and from 1968-1969. I do Pre-Post intervention tests and one of the things we do to control variability over time is compare the average effect of the intervention time period to a pre-intervention period and post-intervention period.

You have perfect data for this kind of test: Pre-Intervention = 1957-1962, Intervention = 1963-1968, Post-Intervention = 1969-1974. Comparing the average of the metrics you describe above during those time periods will give a more stable estimate of the impact.

There is also a great case study for this technique as well: Bob Gibson’s career spanned most of those years.