I’ve wondered for a while whether, for expected record purposes, blowouts should have their runs capped. If any team has given us reason to consider this in more depth, it’s the Pirates.
Right now, the Brewers and Pirates are both 13-16. The Brewers have scored and allowed the same number of runs, while the Pirates have a run deficit of 77, leading to an expected record of 8-21 (so says Baseball-Reference). Of course, the issue with both run differentials is that they are largely the direct result of the teams’ clashes from April 20 to 22, in which the Brewers outscored the Pirates 36-1 (8-1, 8-0, 20-0). Take those games out of the equation and the teams look even, which is probably closer to their true abilities.
In my understanding of the theory, there should be some in-game run differential beyond which additional runs are capped for Pythagorean record purposes. Maybe it’s just me, but it seems superficial to say that the Pirates’ ability to lose by 20 runs tells us much about their ability to win or lose other games. Even if the Pirates give up 1,000 runs this year, April 22’s ignominy would account for two percent of their runs allowed on the season, which is quite a lot of weight for a game that only contributes one loss.
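To make the idea concrete, here is a minimal sketch of what a capped Pythagorean calculation might look like. The cap value, the exponent (1.83 is one common convention), and the non-blowout game scores below are all hypothetical assumptions for illustration; only the three blowout scores come from the games above.

```python
def pythag_wins(games, exponent=1.83, cap=None):
    """Expected wins from a list of (runs_scored, runs_allowed) games.

    If cap is given, any single-game margin larger than cap is trimmed
    back to the cap before runs are totaled.
    """
    rs = ra = 0
    for scored, allowed in games:
        if cap is not None:
            margin = scored - allowed
            if margin > cap:
                scored = allowed + cap   # trim a blowout win
            elif margin < -cap:
                allowed = scored + cap   # trim a blowout loss
        rs += scored
        ra += allowed
    win_pct = rs**exponent / (rs**exponent + ra**exponent)
    return win_pct * len(games)

# Hypothetical mini-season: the three real blowouts plus two made-up
# close games. Capping margins at 6 runs softens the blowouts' weight.
games = [(1, 8), (0, 8), (0, 20), (5, 4), (4, 5)]
uncapped = pythag_wins(games)
capped = pythag_wins(games, cap=6)
```

With a cap, the 20-0 drubbing counts the same as any other lopsided loss, so the capped expected record lands closer to the team’s actual won-lost mark.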
Since I’m interested in the topic but rather simplistic in my understanding of it, I leave it to the faithful readers. Is there a point at which the effect of blowouts ought to be minimized in calculating Pythagorean records? Is Pittsburgh’s situation so rare that we ought not tinker with the formula? Debate it how you want, though I suggest a cheese dip for the afterparty.