I get where John Sickels is coming from. I echo some of his sentiments, some of his hesitancy to embrace the newest of the new in today's metrics. I feel a little bit out over my skis in today's sabermetric community.
I may write for a sabermetrics-focused website, but like Sickels my training is distinctly non-mathematical. I work as a broadcast engineer for a television station. My college degree is in Electronic Media, which most schools call Mass Media. I spent many more days in college sitting behind an audio board, editing video, or trying to sound less wooden than I normally do when introducing a smooth jazz track for the student radio station than I ever did assembling and sorting spreadsheets. When my wife recently took a statistics class required for the degree she is pursuing, I was only a small help.
You will not find me building my own new metric. It isn't in my skill set. Fortunately, that is not the reason Dave Studeman keeps me around. My role here is to write relatively lightweight, fun-to-read articles, mostly highlighting the game's weird outliers and poking fun at the metrics that are still listed in the paper and have been around since US presidents wore full beards. It is low-hanging fruit, but I enjoy it. I started writing the Awards because it was an article I wanted to read and nobody was doing it at the time.
With all that said, it probably comes as little surprise that when a website rolls out a new statistic, I generally follow along pretty well during the introduction and can glean a bit from the conclusion, but my eyes glaze over in the middle section because I lack the math chops to grasp all of the methodology. As our metrics have collectively grown more sophisticated, my outlook has slowly morphed to resemble how I feel about most medical information. I know what the kidney's job is, but I would look awfully stupid if you asked me for specifics about the mechanisms that filter toxins from the blood. And at a certain level, I think that is fine with everybody involved.
I don't begrudge people who want to improve on existing metrics and factor in new knowledge to produce more precision. As we learn and as more tools become available, we should refine our methods of measuring the game. We can't stand still. I support those efforts, and the websites that undertake them, both rhetorically and monetarily.
That being said, I ask for two things from those who build new metrics. One, have patience with those who share your sensibilities but not necessarily your skill set. And two, when you introduce a new metric or write a piece of analysis based on one, go out of your way to write for math majors and laypersons alike. Do your best to explain in simple terms how you are improving on previous efforts, how you are looking at a problem in a different way. Make sure everybody can grasp the intro and the conclusion, even if the middle section is a bit of a blur. I would wager that how well these concerns are addressed has a significant effect on how much traction a new metric gains with the saber-friendly public.