A few weeks ago in this space, we posted a piece of research conducted by a friend of mine named Andy Moursund. He had examined his personal collection of vintage Sporting News issues and compiled the results of Spring Training games from 1945 through 1962 that pitted American League teams against National League teams. Moursund’s finding was quite interesting: through that 18-season period, NL teams compiled a 143-win advantage in those league-vs.-league exhibition contests.
In the comments section of that THT Live post, Sean Smith posed an entirely relevant question: just what sort of win/loss percentage did that advantage represent? Moursund didn’t know, because he hadn’t toted up the results in that way. But in response to Smith’s question, he said he would undertake a complete re-count of the box scores and get back to us with the winning-percentage finding.
And here it is.
Moursund’s additional calculations resulted in a more thorough accounting than his original work. One result is a revised figure for the National League’s total win advantage from 1945 through 1962: it wasn’t actually 143 wins, but instead 102. And he’s now able to put that more exactly counted figure of 102 wins within a total-number-of-games context.
The results are as follows: through those 18 springs, American League teams faced National League teams in a grand total of 2,142 exhibition games, an average of 119 per year. And in those 2,142 contests, NL teams won 1,122 times against 1,020 losses, for a winning percentage of .524.
To put that into perspective, a .524 percentage is equivalent to a team going 85-77 in the current-day 162-game regular-season schedule, or 81-73 in the 154-game schedule that was in effect for most of that period. In a single-season context, a record of 85-77 or 81-73 is good, but hardly great. It isn’t the sort of record that wins championships.
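For anyone who wants to check the conversion, it’s simple arithmetic: multiply the winning percentage by the schedule length and round to the nearest whole game. A quick sketch (my own verification, not from Moursund’s tallies):

```python
# Translate the NL's .524 spring-training winning percentage into
# equivalent single-season records for two schedule lengths.
pct = 1122 / 2142                      # ≈ .5238

for schedule in (162, 154):
    wins = round(pct * schedule)       # expected wins at that percentage
    losses = schedule - wins
    print(f"{wins}-{losses} over a {schedule}-game schedule")
```

This reproduces the 85-77 and 81-73 figures quoted above.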
But that context isn’t the most appropriate way to consider these results. A sample of 2,142 games is vastly greater than a sample of 162 or 154 games. I’ll leave the precise calculation to those better versed in statistics than I am (which isn’t saying much), but I retain enough memory of the stats courses I sweated through in graduate school, just a few decades ago, to assert this: in a sample of 2,142 trials, where the null hypothesis would predict a result somewhere close to 1,071-and-1,071 (a .500 record), the likelihood that a result of 1,122-and-1,020 (a .524-.476 record) came about by random chance is quite small.
Statisticians, please feel invited to correct this assertion.
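In that spirit, here is a minimal sketch of one standard way to test the claim: treat each game as a fair coin flip under the null hypothesis and apply the normal approximation to the binomial distribution. This is my own back-of-the-envelope check, not part of Moursund’s work:

```python
# Two-sided test: could a 1,122-1,020 split plausibly arise from
# 2,142 fair coin flips? Uses the normal approximation to the
# binomial distribution (Python stdlib only).
import math

wins, losses = 1122, 1020
n = wins + losses                          # 2,142 games
mean = n * 0.5                             # expected wins under the null: 1,071
sd = math.sqrt(n * 0.5 * 0.5)              # binomial standard deviation ≈ 23.1
z = (wins - mean) / sd                     # ≈ 2.20 standard deviations
p_two_sided = math.erfc(z / math.sqrt(2))  # two-sided p-value ≈ 0.028

print(f"z = {z:.2f}, two-sided p ≈ {p_two_sided:.3f}")
```

By this rough calculation the NL’s edge sits about 2.2 standard deviations from a .500 split, with a two-sided p-value around .03: unlikely to be pure chance, though not as overwhelming as an even larger sample would make it.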
And in any case: thanks to Sean Smith for posing the question, and thanks to Andy Moursund for answering!