I’m down here at the SABR Analytics Conference, having a blast and learning some cool new things. There was a really nice presentation yesterday on how batters and pitchers perform based on how deep in the strike zone a pitch travels before the bat makes contact. Based on what I saw, we may have found a way to quantify “sneaky fast.” Nice job.
Meanwhile, there’s a rumor in the air that the Fangraphs and Baseball Reference folks are going to sit down here and hash out a common replacement level for their Wins Above Replacement stats. There are lots of differences between the FG and BRef versions of WAR, but replacement level seems like the easiest one to iron out.
I suppose that’s a good thing, though I’ve never been bothered by having different replacement levels. In fact, I think the variety is healthy. I seem to be in the minority on that, however. So, let’s entertain the question of the day: what should the new consensus replacement level be?
You know, figuring out the proper replacement level would be relatively easy if talent were evenly distributed across all teams. The 26th guy in Arizona would be worth about the same as the 26th guy in Chicago. But baseball doesn’t work that way. For a couple of decades, the 26th guy on the Yankees was much better than the 26th guy on the A’s. So, I think the basic question is, which team should you use to set your replacement level? The best? The worst? The average?
While you ponder that, I want to thank all of you who voted for my article, “The Most Critical At-Bat,” in the SABR historical analysis and commentary competition. That article got the most votes of all, and I’ve got a nice plaque to show for it. The fun part was that I was sitting right next to Bill James when they made the announcement. There are some bragging rights there.
Back to the subject. I’m saying this off the top of my head—always a dangerous thing to do—but I believe that Fangraphs’ replacement level is around .280, while Baseball Reference’s is around .320. That can make a difference of perhaps 20 wins over an entire career, and that’s a lot. In a way, it doesn’t matter, because both systems are internally consistent. But, again, this seems to bother some folks.
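To see where a number like that comes from, here is a back-of-the-envelope sketch. The .280 and .320 winning percentages are the figures quoted above; everything else (a regular absorbing roughly one-ninth of his team’s season, a 20-year career) is my own illustrative assumption, not anything taken from either WAR system.

```python
# Rough sketch of how a replacement-level gap compounds over a career.
# The .280/.320 baselines are from the article; the 1/9 playing-time
# share and 20-season career are illustrative assumptions only.

GAMES = 162                  # games in a season
LOW, HIGH = 0.280, 0.320     # the two replacement-level winning percentages

# A full team of replacement players wins this many more games per
# season under the higher baseline:
team_gap = (HIGH - LOW) * GAMES      # about 6.5 wins per team-season

# Spread evenly across nine lineup slots, one everyday regular's WAR
# estimate shifts by roughly this much per season:
per_player = team_gap / 9            # about 0.7 wins per season

# Over a long career, the gap adds up:
career_gap = per_player * 20         # about 14 wins over 20 seasons

print(f"team: {team_gap:.2f}, player: {per_player:.2f}, career: {career_gap:.1f}")
```

Under these crude assumptions the gap comes to about 14 wins; with heavier playing time or a longer career, you get into the neighborhood of the 20 wins mentioned above. The point is only that a few hundredths of winning percentage, applied to every season of a long career, is a lot of WAR.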
An argument I’ve heard for a lower replacement level, such as Fangraphs’, is that a higher level creates more players with career totals below replacement level. The feeling is that, if a player managed to have a lengthy career as a regular, even if it was spent with the A’s in the 1950s, he was probably better than replacement level.
I’m not so sure about that. Remember, WAR is an estimate of value, an imperfect one. There are margins of error in them numbers, particularly for those players who provide the most value with their gloves. You’d expect some significant negative variances even over full careers. Plus, there are players who have managed to survive on reputation more than performance. One of the fun aspects of sabermetrics is trying to determine who those players were. We’ll never be 100% sure who they were, but I think we can say with certainty that they existed.
So, are there going to be some players who racked up decent careers, perhaps enhanced by unsupported reputations, with negative variances in our WAR estimates, who hung on as regulars for really bad teams for a while?
Yes, I’m quite sure there were. How many were there? No clue. The smart guys at Fangraphs and Baseball Reference will try to figure that out. All I can say is, don’t be afraid of those negative WAR totals.