Is the AL Really Superior? (Part 1)

Conventional wisdom has it that the American League is (by far) the better league. This notion is based largely on anecdotal evidence (and win-loss records in interleague play, the All-Star games, and the World Series), as far as I am aware. I don’t know of any study that has rigorously looked at whether there is indeed a difference between the overall quality of the respective leagues, be it in hitting, pitching, defense (and base running, in the interest of being comprehensive), or some combination thereof. Until now. We will be limiting the discussion/analysis to pitching and offense, although pitching necessarily includes defense.

Win-loss records in interleague play (as of June 28)

2006: AL leads 128 to 75.

That is quite an impressive record, even in “only” 203 games. One standard deviation due to chance alone is about seven wins. Given that the American League is 26.5 wins over .500, it is nearly impossible that the two leagues are currently equal (or that the NL is better) in overall talent. End of article, right? Not quite. What about in previous years?
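For readers who want to check the math, here is a minimal sketch of the binomial arithmetic behind that claim (the function name is mine):

```python
import math

def sd_and_z(wins, losses):
    """Chance-alone SD of wins for a true-.500 team, and how many SDs above .500 a record is."""
    games = wins + losses
    sd = math.sqrt(games * 0.5 * 0.5)  # binomial SD of wins when p = .5
    z = (wins - games / 2) / sd        # wins above .500, in units of that SD
    return sd, z

sd, z = sd_and_z(128, 75)  # 2006 interleague record through June 28: sd ~ 7.1, z ~ 3.7
```

A result of roughly 3.7 standard deviations corresponds to about a one-in-ten-thousand chance if the two leagues were truly even.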

2005: AL leads 136 to 116

OK, now we are at 264 wins and 191 losses in favor of the American League over the last two years. That is still very impressive. It is more than three standard deviations above .500, making it still nearly impossible for the talent in the NL and the AL to be the same (or to favor the NL) over the last two years.

Finally, let’s take a look at the entire history of interleague play:

Year	AL Wins 	NL Wins
1997	97	117
1998	114	110
1999	116	135
2000	136	115
2001	132	120
2002	123	129
2003	115	137
2004	126	125
2005	136	116
2006	128	75
Totals	1,223	1,179

Over the last three years (2004-2006), the AL leads in interleague play by a count of 390 to 316, a much more modest, but still fairly impressive, advantage (around 1.5 standard deviations above .500). Based on these numbers alone, however, we cannot conclude with any certainty (at least in a statistical sense) from the results of interleague play from 2004 to the present (or from any year before 2004 to the present, for that matter) that the AL has been the better league overall. Of course, it is possible that there was more parity in 2004, and that in 2005 and 2006 the AL has become quite the better league. That may even be the likely scenario. The data certainly suggest so.

Does either league have an inherent advantage in home games?

Is it possible that the American League simply has an inherent advantage in games played in AL home parks, and that the talent is actually about the same in both leagues in recent years (which would be a little odd, since both leagues have about the same number of wins since 1997)? In an excellent article presented at the SABR-31 convention in 2001, John Jarvis shows, using data from the 1997 through 2002 interleague seasons, that the NL designated hitters are not nearly as good as their AL counterparts in interleague games in AL parks. The difference is about 75 points in batting average plus slugging average (BPS), or about .1 runs per game, using a linear weights formula. That translates to around 3.5 extra wins since 2004, which would cut the American League advantage from 390-316 (55.2%) to 386.5-319.5 (54.7%), if we were to nullify the AL’s extra home field advantage.

However, Jarvis also points out that NL teams have an advantage in pitcher-hitting when playing interleague games in NL parks. That advantage is around 50 points in batting average plus slugging average, which corresponds to .06 runs per game (pitchers obviously don’t get as many plate appearances as designated hitters), or around two wins in those 706 interleague games since 2004. So the net advantage (due to the differential effect of the designated hitter and pitcher-hitting) to the AL is only 1.5 wins in three years, or about half a win a year, not a whole lot to get excited about. (Note: Because of the inherent advantage to both leagues in their home parks, from 1998 to 2005 the NL has had a 54.7% winning percentage at home; in the AL, that number is 54.2%, compared to a 53.6% win percentage at home for all teams, including interleague games.)
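As a rough sketch of the win arithmetic (the 10-runs-per-win conversion and the even split of the 706 games between AL and NL parks are my assumptions, not figures from Jarvis):

```python
RUNS_PER_WIN = 10      # common rule-of-thumb conversion from runs to wins
HOME_GAMES = 706 / 2   # roughly half of the 706 interleague games since 2004 in each league's parks

dh_wins = 0.1 * HOME_GAMES / RUNS_PER_WIN        # DH gap favors the AL in AL parks: ~3.5 wins
pitcher_wins = 0.06 * HOME_GAMES / RUNS_PER_WIN  # pitcher-hitting gap favors the NL in NL parks: ~2.1 wins
net_al_wins = dh_wins - pitcher_wins             # net AL edge: ~1.4 wins over three years
```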

How else can we determine the relative quality of the two leagues?

Shift in talent

Given that the AL does indeed have a very nice win-loss advantage over the last two years, but a relatively modest one over the last three years, let’s look at the individual players who have switched leagues during this time period to see if there has been a positive net shift in talent in recent years from the NL to the AL, as the pattern of interleague wins and losses suggests.

Of course, whether there has or hasn’t been a talent shift, we still won’t really know which league is better after the shift, unless we know what our baseline was before the shift, so clearly a more detailed analysis is going to be necessary to determine the exact relative quality of the leagues at any point in time. (A talent shift, absent any other information, does suggest that the league receiving more net-positive talent is better after the shift; however, there is not a whole lot of certainty in that assumption.)

Let’s start by looking at the “shift in offensive talent” from 2000 to 2006. The way we are going to do that is simple. For all batters who have switched from the NL to the AL, we will compile their collective linear weights rate (in, and relative to, the NL, of course) in the year before the switch (this will represent their true talent, at least with respect to the other players in their original league), and apply that to the number of PA they had in the year after the switch. We will do the same thing for players who went from the AL to the NL. The difference will be the shift in talent. From that number, we can determine how many total runs and runs per game each league gained or lost on average.

For example, if 20 below-average players from the NL who had a collective lwts (relative to the other players in the NL) of -10 runs per 500 PA went to the AL, and 20 average players (a collective lwts of zero), again, relative to their own league, went from the AL to the NL, it would be clear that the AL just got weaker and that the NL just got stronger. The NL lost 20 below-average players and received 20 average players in return. The AL lost 20 average players and got 20 below-average players in return.
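The bookkeeping in that hypothetical can be sketched in a few lines (all the numbers are the made-up ones above; lwts rates are relative to each player's original league):

```python
def group_runs(lwts_per_500, total_pa):
    """Runs above or below league average that a group represents over total_pa plate appearances."""
    return lwts_per_500 * total_pa / 500

# 20 players at -10 lwts per 500 PA move NL -> AL; 20 average players move AL -> NL.
# Give each group 20 x 500 = 10,000 PA for simplicity.
runs_to_al = group_runs(-10, 20 * 500)  # -200 runs arrive in the AL
runs_to_nl = group_runs(0, 20 * 500)    # 0 runs arrive in the NL

net_shift_to_nl = runs_to_nl - runs_to_al  # +200: the NL came out ahead in the swap
```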

There is one caveat to our conclusion above that one league got weaker and the other league got stronger. (If any or all of the next three paragraphs are confusing or hard to follow, don’t worry about it. They are not essential to the analysis.) What if the average player in the AL were equivalent to a -10 per 500 plate appearances (from now on, when I refer to “runs,” it is assumed that we are talking about “runs per 500 PA” unless otherwise noted) player in the NL; in other words, the NL was a stronger offensive league in the first place? If that were the case, then we actually swapped equivalent players and neither league got weaker or stronger after the switch. Keep that in mind as you read on.

One other thing we have to do is determine how to replace a player in a league when more players (or more precisely, more PA) leave than arrive. For example, what if 20 -10 players went from the NL to the AL, but the NL received nothing in return from the AL? We now have 20 new players in the AL who are below average (at least they were when they were in the NL). Presumably, they are going to replace some other below-average players in the AL. We also have a “hole” of 20 players in the NL, which is going to be filled, again presumably, by 20 below-average bench or minor league players.

What we are going to do is assume that a “hole” will be filled by a phantom player who is a little above replacement level, which will be arbitrarily defined as a player with -10 lwts per 500 PA. The reason we are not going to fill a “hole” with a true replacement-level player (about -15 runs) is that we have to assume that if more players leave a league than arrive, there is probably an excess of talent available within that league or its minor league clubs to replace the deficit.
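A minimal sketch of the hole-filling rule (the -10 phantom level is the article's arbitrary choice; the helper name and the 500-PA-per-player assumption are mine):

```python
PHANTOM_LWTS_PER_500 = -10  # a bit above true replacement level (about -15)

def phantom_runs(pa_departed, pa_arrived):
    """Runs credited to phantom fill-ins when more PA leave a league than arrive."""
    hole_pa = max(0, pa_departed - pa_arrived)
    return PHANTOM_LWTS_PER_500 * hole_pa / 500

# 20 players at 500 PA each leave the NL, and nothing comes back:
nl_fill = phantom_runs(20 * 500, 0)  # -200 runs of phantom production fill the hole
```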

Transfer of offensive talent from one league to the other from 2000 to 2006


Year	Gainer	Runs   	Runs per
    	      	Gained   Game Gained
99-00	AL	133	.059
00-01	AL	143	.063
01-02	NL	141	.058
02-03	NL	86	.021
03-04	AL	198	.087
04-05	AL	119	.052
05-06*	AL	113	.050
Total	AL	479	.211
*Prorated to a full 2006 season.
Some problems with this method

As we mentioned earlier, in order to exactly compute the transfer of runs from one league to another, we would first have to establish the relative offensive strength of both leagues before the switch, and then adjust each player’s lwts to some common baseline rather than in relation to his own league. For example, from 2005 to 2006, 34 players went from the AL to the NL. The average lwts for those 34 players in 2005 was -15.2 runs. Thirty-eight players went from the NL to the AL, and their average lwts was -12.3. (By the way, a particularly bad crop of players switched leagues from 2005 to 2006.) It would seem that almost three net-positive runs per player went from the NL to the AL, further weakening the NL. However, it is likely that the AL was already offensively superior in 2005 by several runs per player. So it is also likely that the -15.2 players who went from the AL to the NL were actually equivalent to (or better than) the -12.3 players who went from the NL to the AL. Thus no shift in offensive talent probably occurred between 2005 and 2006.

Another problem with the data in the above chart is that the first year (year before the switch) collective lwts of players who switched leagues (such as the -12.3 and the -15.2 in 2005) does not represent their true talent levels. That is because these players are selectively sampled, by virtue of the fact that they indeed switched leagues. In general, players who switch leagues tend to have had below-average performance in the year before the switch (which is often why they are released or traded). In order to determine their true talent levels, we have to regress their pre-switch performance. Since they tend to be part-time players as well, their performance has to be regressed quite a bit. For example, the 72 players who switched leagues from 2005 to 2006 had a collective lwts rate of -13.6 runs per 500 PA. Their true talent level is not really that bad. We would expect them to hit a lot better than that in 2006, and in fact, they have so far, to the tune of around -3.5 runs per 500 PA. So if one league transfers a group of players who were -15 runs per 500 in 2005 to the other league and the other league transfers an average-hitting crop of players, you are not really transferring as many runs as you might think (the -15 per 500 PA times the number of PA) from one league to another. In fact, you are transferring considerably less.
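A sketch of that regression step (the 75% regression amount is a hypothetical figure chosen to roughly reproduce the numbers in the text, not the author's actual method, which would tie the amount of regression to each player's PA):

```python
def regress_to_mean(observed_lwts, league_mean=0.0, regression=0.75):
    """Shrink an observed lwts rate toward the league mean to estimate true talent.
    Part-timers' small samples call for heavy regression; 0.75 is hypothetical."""
    return league_mean + (observed_lwts - league_mean) * (1 - regression)

estimate = regress_to_mean(-13.6)  # -3.4, close to the group's actual -3.5 in 2006
```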

The best way to avoid this selective sampling problem and the need for regression is to look at each group of league-switchers’ performance in the year after the switch. This should indeed represent their true talent levels, since the selective sampling issue (the fact that they switched leagues) should not affect their post-switch performance. So let’s look at the same chart above, this time using the post-switch lwts as our basis for determining any possible transfer of net talent from one league to the other. In fact, we’ll add this data to the previous chart and expand the previous data as well.

This time, we will compute the number of runs transferred from one league to another using each group’s post-switch lwts rather than their pre-switch lwts. In other words, if 20 players went from the NL to the AL and were -10 players in the AL after the switch, and 20 players went from the AL to the NL and were average (zero lwts) players in the NL after the switch, we will presume that the AL got worse (it got 20 -10 players) and the NL got better (it got 20 average players). Again, this assumes that the NL and the AL were equivalent before the switch.

Year 	Players  Runs    Runs     Players  Runs    Runs    Which   How     Runs   
     	to AL    Before  After    to NL    Before  After   League  Many    per
     	         Switch  Switch            Switch  Switch  Gained  Runs    Game
99-00	41       -3.1    -8.3     36       -8.7    -2.2    NL      172     .066
00-01	31       -2.3    -5.0     40       -13.0   -7.5    AL      2       .001
01-02	34       -7.8    -13.0    39       -4.9    -3.6    NL      232     .090
02-03	39       -7.1    -4.9     39       -3.7    -1.4    NL      82      .032
03-04	61       -1.8    -2.6     56       -4.8    -3.2    AL      60      .026
04-05	39       -3.2    -8.5     39       -7.9    0.0     NL      199     .077
05-06	38       -12.3   -5.7     34       -15.2   -1.0    NL      198     .077

Totals   283      -5.1    -6.5     283      -7.9    -2.8    NL      821     .345

There is a lot of information in the above chart. As I mentioned before, and as you can see from columns three and six in the chart, the average player who switches leagues (via a trade, free-agent signing, a player who is released and then re-signed, etc.) is a below-average player. He is also a part-timer – the average number of PA per player in the year after the switch is only around 322. Another pertinent piece of information is that the average age of a player who switches leagues is around 31 after the switch. At that age, players tend to lose around 1.5 runs per 500 PA in offense per year, due to the effects of aging. We’ll see why that is important in just a few minutes.
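To see why the aging rate matters, note that a group averaging 31 years old should be expected to decline by about 1.5 runs per 500 PA from one season to the next from aging alone. As an illustration (the function name is mine, and this ignores regression and any league-strength difference):

```python
AGING_DECLINE = 1.5  # runs per 500 PA lost per season around age 31

def expected_after_aging(lwts_per_500, seasons=1):
    """Expected lwts rate after aging alone; no regression or league adjustment."""
    return lwts_per_500 - AGING_DECLINE * seasons

# The NL-to-AL group was a collective -5.1 before switching; aging alone
# predicts about -6.6 the next year, close to the -6.5 it actually posted.
prediction = expected_after_aging(-5.1)
```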

If we look at the last three columns, we can see that the runs-transferred results are completely different from those in the previous chart. The first time we did the calculations, we used the players’ pre-switch lwts to determine the transfer of talent; the second time (the chart above), we used the post-switch lwts. In the first chart, using the first method, we found that 479 offensive runs were transferred from the NL to the AL from 1999 to 2006. Using the second method, in the second chart above, we determined that 821 runs were transferred from the AL to the NL. Clearly one method or the other (or both) is wrong.

The reason why we can’t use just one method or the other (pre- or post-switch lwts) to determine how much offensive talent is being transferred from one league to the other is that at any point in time, there may be (and in fact there usually is) an imbalance in league talent, and thus when we compare one league’s lwts to the other league’s lwts, we are comparing apples and oranges. In order to determine whether any talent was transferred from one league to another, first we have to determine what the relative talent was between the two leagues, just prior to the transfer.

Come back tomorrow for Part 2, in which I determine the relative quality of offensive talent in any given year.
