The Federal League of 1914-15 was the last serious challenge to the two-league structure that has come down to us today. At that point, the supremacy of the National and (especially) American Leagues was hardly so secure.
There are plenty of fascinating historical topics having to do with the Federal challenge and its ultimate resolution, but what interests me most is what a sudden shock of massive expansion did to the talent level in the “major” leagues.
In 1913, there were 16 major league teams. In 1914, there were 24. As population grows and coaching improves, the talent pool increases as well, but it doesn’t increase by 50 percent in one season!
So how much did the overall talent level suffer? There’s plenty of evidence that the Federal League was never quite on par with the AL and NL, but how much worse was it? And how much did the established leagues lose when the Federals raided their rosters?
Predicting the future of the past
There are a lot of ways to estimate the relative talent level in a league. In comparing a wide range of league-seasons, some analysts like to use population levels, which is an interesting way to consider the impact of integration on major league baseball of the 1950s and 1960s.
To handle more specific problems, analysts tend to look at the year-to-year statistics of players who switched leagues, or even in-season totals of players who were traded from one league to another. For instance, if Lance Berkman OPS’s .750 for the Yankees after posting an .808 mark for the Astros, that would be one data point supporting the theory that the AL is tougher than the NL.
Virtually any method in either of these categories lacks either specificity (in the case of population measures) or a sufficient sample (in the case of many league-switching studies). That isn’t a knock on analysts who attempt such studies; it’s the nature of the beast.
I’d like to try another approach: using retrospective projections. Using an algorithm to predict performance, we can “forecast” how any player would perform in 1914, 1941, or 1961. A system like Marcel (which I’m using for the present exercise) allows us to use three years of data, plus a regression component, to estimate each player’s true talent level going into any given season.
That’s a mouthful, so let’s look at an example. One of the more prominent players to jump to the Federal League was Hal Chase, a man always ready to jump at a few dollars. In 1911, ’12, and ’13, Chase posted OPS numbers of .762, .671, and .660 in about 570 plate appearances per season.
If Marcel had existed back in the winter of 1913-14, it would’ve predicted that the soon-to-be 31-year-old Chase would post a .681 OPS—near the middle of his past three years’ performances, with a bit of a downward nudge for being on the wrong side of 30.
As it turned out, Chase spent about half of the 1914 season with Buffalo of the Federal League. In Buffalo, his OPS was much better than expected: .870. In the other half of the season, with the American League White Sox, his performance was .708, much closer to the projected estimate. If the only information we had at our disposal was the experience of Hal Chase, we’d conclude that the Federal League was very, very hitter-friendly, either due to tiny parks or lower-quality pitching.
Of course, we have data on far more Federals than just Hal Chase. After we take out pitchers and players who had never appeared in the majors before joining the FL, we’re left with about 70 guys. The vast majority of them performed better than projections.
Once we weight each hitter’s experience by playing time both before and after the switch, the average gain (measured by wOBA, a great overall measure for offensive value) works out to about 8 percent. That’s huge. Representative of the league-wide trend was Mickey Doolan. His projection was .235/.286/.302, but his performance in the FL was .245/.311/.323. Not very good in either league, but considerably better with the upstarts.
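The playing-time weighting described above can be sketched in a few lines. The player tuples here are invented for illustration; the actual study pooled roughly 70 FL hitters.

```python
# Playing-time-weighted average gain of actual wOBA over projected wOBA.
# Each tuple: (plate appearances, projected wOBA, actual wOBA).
# These numbers are illustrative only, not the real sample.

def weighted_woba_gain(players):
    total_pa = sum(pa for pa, _, _ in players)
    gain = sum(pa * (actual / projected - 1)
               for pa, projected, actual in players)
    return gain / total_pa

sample = [
    (550, 0.310, 0.335),
    (400, 0.290, 0.312),
    (300, 0.330, 0.355),
]
print(f"{weighted_woba_gain(sample):+.1%}")
```

Weighting by plate appearances keeps a 50-PA cup of coffee from counting as much as a full season, which matters in a sample where playing time varied widely.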
Still, we’re left without the entire picture. We can estimate a batter moving from the AL/NL to the FL would generate about 8 percent more offense. But why? We might hypothesize that weaker pitching is the answer, but what if all the Federal League parks were tiny? If the entire league moved from, say, Miller Park to Coors Field, offense would certainly go up, but it wouldn’t say anything about the level of competition.
Fortunately, we have data on pitchers, too. To keep things simple, I’m not going to worry about separating pitching and defense; I’ll stick with Run Average to group the two together.
Once again, we have a decent sample to work with: about 40 pitchers with AL/NL projections who played in the FL. Again weighting for playing time, their RAs improved by about 7 percent. In other words, pitchers who left for the Federal League generally posted better numbers in their new home, suggesting that the level of play was markedly lower in the renegade organization.
(Remember, the 7 percent difference encompasses pitching and defense. If we assume that defensive prowess was relatively low in the Federal League, it may be that the pitchers themselves improved by more than 7 percent.)
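The pitching side works the same way, except that “improvement” means a lower run average, so the sign flips. Another sketch, again with invented numbers standing in for the actual 40-pitcher sample:

```python
# Innings-weighted RA improvement: a positive result means pitchers
# allowed fewer runs than projected. Each tuple: (innings pitched,
# projected RA, actual RA) -- illustrative values only.

def weighted_ra_improvement(pitchers):
    total_ip = sum(ip for ip, _, _ in pitchers)
    improvement = sum(ip * (1 - actual / projected)
                      for ip, projected, actual in pitchers)
    return improvement / total_ip

sample = [
    (250.0, 3.60, 3.35),
    (180.0, 4.10, 3.80),
    (120.0, 3.20, 3.05),
]
print(f"{weighted_ra_improvement(sample):+.1%}")
```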
What about the situation back on the home front? The AL and NL lost some stars, but not a huge percentage of them. It seems implausible that the established leagues suffered so much as to make them equal with the Federals, but at the same time, could they have really maintained their level of play against the Federal raids?
The answer is a bit surprising. Using the same techniques as I described above, AL and NL hitters performed 4 percent worse than projected. AL pitchers beat projections by 2 percent, while NL pitchers outperformed forecasts by 3 percent.
Let’s take a closer look at what these changes are telling us. If hitters performed worse than projected, that would seem to suggest that the level of competition—at least the average quality of pitching—had gone up. The fact that pitchers beat their projections hints at a change in environment, but the changes were about equal for each league, and no clubs switched parks between the 1913 and 1914 seasons.
If the environment is not the cause, perhaps Federal raids were lopsided and the established leagues were able to keep most of their quality pitchers while losing some better hitters. Or clubs were better at scouting and acquiring quality minor league hurlers than picking up hitters from the farm, making life tougher on veteran hitters than on veteran pitchers.
Whatever the reason, the unexpected conclusion is that, despite a quasi-expansion, the established leagues may have been slightly more competitive in 1914 than in 1913.
In fact, the average level of play across 24 teams was not much worse than the level of play across 16 teams the previous season. Taking all three leagues together, batters hit about 1.5 percent worse than expected, while pitching and defense was about 3.5 percent better than expected. That’s a net decline in the talent level, but only a tiny one—small enough to give us a different view on the effect of any increase in league size.