The season is almost over, but there is still something in 2013 baseball to look forward to once the final pitch is thrown in a day or two. No, I don’t mean a thorough executive review of the obstruction rule. (I did say “look forward to.”) I mean awards season. Gold Gloves, Manager of the Year, Comeback Players, Rookies of the Year, and atop the heap, the Cy Young and Most Valuable Player Awards.

These awards will honor the best players in the game today, and be the focus of lively discussion of who actually deserves those recognitions. That’s probably not a euphemism this year, though in 2012, with the clash between partisans of Miguel Cabrera and Mike Trout for the American League MVP Award, the term did cover some scorching arguments. Remarkably, the two are in the thick of MVP discussions again this year, but with far less vitriol, or disagreement over who is going to get the trophy.

It is the controversy that erupted last year that led me to do my own study of how MVP, and Cy Young, voting tracks with the various statistical measures used to gauge the excellence of the players in award contention. This study will appear in *The Hardball Times Baseball Annual 2014*, my first work for that series. This article today is something a little different: using what I learned in that study to forecast who is going to win the major awards this year.

### Giving part of the game away

To understand the numbers and statistical categories I’m going to be throwing at you soon, I need to give a distillation of what I did in the original study. However, I cannot share everything that’s going to be in the *THT Annual*. First, it would bloat this article beyond the bounds of reasonable patience. (Some would say I do this with regularity already. Quiet, you.) Second, part of my purpose here is to entice you to support The Hardball Times by buying the *Annual*. I can give out some tastes of what I’ve done, but ladling out the whole thing defeats the purpose.

Let me get instead to what I can tell you. I looked at the years 1996 to 2012, taking data in a broad swath of metrics for the top three finishers—whom I wound up calling the “medalists”—in each vote for MVP and Cy Young. I trimmed pitchers out of the MVP voting, because of the impossibility of making statistical comparisons, and made a couple other adjustments for ties in the voting.

I marked which medalist had the best result in each category, and where he finished in the voting. I made two separate tabulations of the results, but I’ll be using only one in this article, the simple count of how often the award winner led his fellow medalists in the category. I took the percentage this produced, and plotted its place on the continuum between random chance at 33.3 percent and perfect positive correlation at 100 percent (or occasionally with negative correlation at 0 percent). This gave me a number between 0 and 1 (or 0 and negative 1) reminiscent of those that measure positive and negative correlation, though this isn’t exactly the same thing.

I’ll give an example. Over 34 MVP races, the award winner has led the medalists in weighted on-base average (wOBA) 17 times. This is obviously 50 percent. On the scale between random chance and perfect correlation (33.3 percent to 100 percent), it comes out to a +.250 correlation score. This falls roughly in the middle of the 18 metrics I looked at for the MVP Award.
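That scaling can be written as a tiny function. This is a sketch of the mapping as described above, not the study’s actual code; the function name is mine, and the negative branch (scaling between 33.3 percent and 0 percent) is my reading of the description:

```python
def correlation_score(p, chance=1/3):
    """Map a category-leading rate p onto a correlation-like scale:
    random chance (1/3 among three medalists) -> 0, always leading
    (1.0) -> +1, and never leading (0.0) -> -1."""
    if p >= chance:
        return (p - chance) / (1 - chance)  # scaled toward +1
    return (p - chance) / chance            # scaled toward -1

# The wOBA example: the winner led in 17 of 34 MVP races, or 50 percent
print(round(correlation_score(17 / 34), 3))  # 0.25
```

Plugging in 50 percent reproduces the +.250 score from the wOBA example.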

What I’ll be doing in this article is looking at who wins in the categories I covered, totaling up the respective correlation scores (behind a bit of a curtain), and seeing who has the strongest case for winning this year’s MVP and Cy Young Awards in light of those correlations. For the Cy Young, I use 15 metrics, some of them closely related to each other (I will handle the distortions that can cause when they arise). For the MVP, I use 13 season-long metrics, and five from a shorter time frame: I’ll explain that when it comes up. Again, some MVP stats are interrelated, and will need combing out.

Remember, correlation is not causation. It doesn’t necessarily follow that award voters are looking at various statistics and giving differing weights to whoever leads in those categories. It may well happen with the simpler measures, but I doubt the baseball writers assigned to do the voting are drilling down into xFIP- and wRC+ to make their calls. Instead of saying that winning a category raises a player’s award chances, it’s better to say that the writers tend to vote for players who lead the other contenders in that category, leaving cause and effect to one side.

That makes doing these predictions a bit more speculative than inherently comes with the nature of predictions. Think of this as a test flight of something promising but uncertain. I could end up coming through like Chuck Yeager, or Steve Austin. You folks on the ground should have fun finding out which.

The system, by its nature, requires that I have the top three finishers identified before I can complete the analysis. This is an obvious problem, as by the time we learn who the medalists were, we’ll also know who won gold. I found a work-around. The writers at Baseball Prospectus filled out their own MVP and Cy Young ballots a few weeks ago, and showed the voting down to sixth place. These ballots aren’t sure to be a match for the official ones, but they should be close.

I took the top four finishers in B-P’s MVP voting, and the top five for the Cy Young, and ran the process with them, narrowing things down to a top three by eliminating the trailing players in a first pass through the system. The different numbers come partly because the AL MVP vote had a tie for fifth, and I didn’t care to run six players through the paces when I was trying to get to three. This method produced one moderate surprise with the AL Cy Young, but largely ended up confirming conventional wisdom about the medalists.
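The narrowing pass works roughly like this sketch. Every name and number below is invented for illustration, and this is my reconstruction of the mechanic, not the study’s actual code: each category credits its correlation score to the best-ranked contender still standing, so when a trailing player is eliminated, his category leads pass down to the next-best survivor and the totals are re-run.

```python
def tally(rankings, scores, contenders):
    """Credit each category's correlation score to the best-ranked
    contender in that category who is still in the running."""
    totals = {p: 0.0 for p in contenders}
    for metric, ranked in rankings.items():
        for player in ranked:  # ordered best to worst
            if player in totals:
                totals[player] += scores[metric]
                break
    return totals

# Toy data: per-category orderings and invented correlation scores
rankings = {"bWAR": ["A", "B", "C"], "ERA": ["B", "A", "C"], "Wins": ["C", "B", "A"]}
scores = {"bWAR": 0.5, "ERA": 0.3, "Wins": 0.2}

first_pass = tally(rankings, scores, {"A", "B", "C"})
# Eliminate the trailing contender and re-run the totals
survivors = set(first_pass) - {min(first_pass, key=first_pass.get)}
second_pass = tally(rankings, scores, survivors)
```

Here “C” drops out on the first pass, and his Wins lead passes down to “B” on the second, closing the gap at the top.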

### Cy Young Award, National League

The Cy Young race in the NL was competitive for a while between Los Angeles’ Clayton Kershaw and New York’s Matt Harvey, until Harvey’s elbow came unstrung in August, requiring Tommy John surgery. After that, the punditry slid over to the question of whether Kershaw’s season could net him the MVP Award as well as the Cy. I won’t be covering that matter, but I will examine whether there is cause for Kershaw to be less certain of the Cy Young Award than he likely is.

Baseball Prospectus’ writers put Kershaw at the top of their ballot, with Harvey second and Adam Wainwright of the Cardinals third. Jose Fernandez of Miami finished fourth, and Cliff Lee of the Phillies fifth. They all made an appearance somewhere on the leader board.

(A very quick rundown of some metrics for those who might be new to them: WAR is Wins Above Replacement, calculated by two similar but distinct methods. FIP is Fielding Independent Pitching. oOPS is on-base plus slugging for the opponents a pitcher pitched against; wOBA is Weighted On-Base Average, an advanced metric invented by Tom Tango, and o-wOBA is opponents’ wOBA. A metric with a minus sign has been adjusted for league and home park.)

**National League Cy Young Award**

| Metric | Leader | Metric | Leader | Metric | Leader |
|--------|--------|--------|--------|--------|--------|
| bWAR | Kershaw | fWAR | Kershaw | Strikeouts | Kershaw |
| ERA | Kershaw | FIP | Harvey | K/9IP | Fernandez |
| ERA- | Kershaw | FIP- | Harvey | xFIP- | Harvey |
| oOPS | Kershaw | o-wOBA | Kershaw | Wins | Wainwright |
| BB/9IP | Lee | HR/9IP | Harvey | IP | Wainwright |

Cliff Lee and Jose Fernandez lead just one category each. Walks per nine innings has only a slight correlation with Cy Young wins, so Lee is no real factor. Strikeouts per nine innings has a stronger link, but not in the top five, so we can exclude Fernandez as well. This leaves a fairly clear top three: Kershaw, Harvey and Wainwright, the same top three from the actual voting. That works out neatly.

Kershaw leads the ERA categories, while Harvey is ahead in the ERA-scaled FIP measures. (To briefly explain them: FIP measures walks, strikeouts, and home runs. xFIP trades homers for fly balls, on the assumption that pitchers don’t affect the proportion of flies that go for four bases. The minus sign is like the plus for offensive categories, normalizing for league and park.) Interestingly, the FIPs have a closer correlation to Cy Young wins than ERA does. Voters aren’t just using the traditional stats to fill out their ballots.

That means Harvey’s FIPs are worth more as an indicator than Kershaw’s ERAs. Kershaw, though, overcomes that elsewhere. The WAR measures favor him, and are at least as indicative as the FIPs. (bWAR, for Baseball-Reference, has a stronger link than fWAR, for FanGraphs.) He also has opponents’ OPS and wOBA on his side, though their correspondence is lower. Harvey can claim the lowest home runs per nine innings ratio, but wouldn’t want to: that correlation is actually somewhat negative.

Total strikeouts belong to Kershaw, while strikeout rate, once the also-ran Fernandez is removed, goes to Harvey. Kershaw gets the better of that exchange. In Lee’s absence, walks per nine innings falls to Wainwright. The low correlation there doesn’t boost his case much. Wins does better, a touch higher than ERA, while innings pitched is somewhat better still, though trailing the WARs, FIPs, and strikeout measures. Wainwright finishes a pretty clear third.

The fight at the top is closer, more so than one might think when one pitcher is bandied about as an MVP contender and the other is not. If we add up the correlation scores for each category, it looks like this:

Kershaw: 2.168
Harvey: 1.485
Wainwright: 0.618

This may not be a perfectly reflective representation, because several of the statistics I examine are closely related to each other. bWAR and fWAR measure roughly the same thing; ERA and ERA- do likewise, as do the FIPs and xFIP, and arguably opponents’ OPS and wOBA. Each of those groups goes to one pitcher, meaning their ratings get multiplied by the near-repetition. Trim each group down to one rating (the highest of the group in each case), and the numbers look different, if not decisively so:

Kershaw: 1.462
Harvey: 0.739
Wainwright: 0.618

Neither tallying method bumps the expected winner off his perch: the second look actually widens the margin at the top. The system predicts Clayton Kershaw will win the 2013 National League Cy Young Award.
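The trimming step described above can be sketched in a few lines. This is a minimal sketch under my own framing, with invented correlation scores and groupings rather than the study’s actual figures:

```python
def compress(category_scores, groups):
    """Total a player's correlation scores, but within each group of
    near-duplicate metrics count only the highest score he leads.
    category_scores: {metric: score} for metrics the player leads.
    groups: list of sets of closely related metrics."""
    grouped = set().union(*groups)
    total = sum(s for m, s in category_scores.items() if m not in grouped)
    for g in groups:
        led = [category_scores[m] for m in g if m in category_scores]
        if led:
            total += max(led)
    return total

# Hypothetical figures: a pitcher leading both WAR flavors plus ERA/ERA-
leads = {"bWAR": 0.40, "fWAR": 0.35, "ERA": 0.30, "ERA-": 0.28}
groups = [{"bWAR", "fWAR"}, {"ERA", "ERA-"}, {"FIP", "FIP-", "xFIP-"}]
print(round(compress(leads, groups), 3))  # 0.7: the near-repeats are trimmed
```

Without the compression, this hypothetical pitcher would bank all four scores (1.33); with it, only the best of each related pair counts.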

### Cy Young Award, American League

Max Scherzer, the only 20-game winner in the majors this year, is widely considered the AL favorite for the Cy Young. The brains of Baseball Prospectus lean that way also. They fill out the top five in the voting with, in order, Yu Darvish of the Rangers, Felix Hernandez from Seattle, Chris Sale of the White Sox, and Scherzer’s Detroit teammate Anibal Sanchez. This group gives Scherzer a serious run for the money.

**American League Cy Young Award**

| Metric | Leader | Metric | Leader | Metric | Leader |
|--------|--------|--------|--------|--------|--------|
| bWAR | Sale | fWAR | Scherzer | Strikeouts | Darvish |
| ERA | Sanchez | FIP | Sanchez | K/9IP | Darvish |
| ERA- | Sanchez | FIP- | Sanchez | xFIP- | Hernandez |
| oOPS | Scherzer | o-wOBA | Scherzer | Wins | Scherzer |
| BB/9IP | Sale | HR/9IP | Sanchez | IP | Sale/Scherzer (tie) |

With one category win, King Felix falls out of serious contention. Nobody else does us that favor. The remaining four are bunched, their preliminary correlation scores running from a high of 1.107 to a low of 0.758. Bringing up the rear there is Chris Sale, which I confess comes as a relief. Award voters may be getting more stat-savvy, but it’s still tough to imagine them giving a top-three finish to a starting pitcher who finished three games below .500, no matter what unwritten rules they bent on Felix’s behalf in 2010.

This reshuffles the deck. Walks per nine innings now goes to Scherzer, along with bWAR. Yu Darvish picks up Hernandez’s top spot in xFIP-. This is a net positive for Scherzer: bWAR is a better indicator than xFIP-, while walks per nine as noted above doesn’t move the needle much.

It also knocks a surprise leader off the pace. Anibal Sanchez, fifth in the B-P voting, narrowly led the pack when there were five contenders. He got no benefit when Sale and Fernandez were filtered out, however, and that drops him all the way to bronze among the medalists. The unadjusted correlation totals for the three:

Scherzer: 1.668
Darvish: 1.327
Sanchez: 1.107

If we compress the numbers in related categories as we did above, things change yet again. Scherzer and Sanchez both have significant overlaps: bWAR/fWAR and oOPS/o-wOBA for Max, ERA/ERA- and FIP/FIP- for Anibal. Darvish has a more tenuous relationship between total strikeouts and strikeout rate. The link is not as obvious as when calculating WAR by somewhat different methods or adjusting ERA for ballpark. (For one thing, K/9 gives relievers a chance to show their stuff, against the three pure counting stats in the set that are starters-only.) Whether I compress these two categories ends up being pretty decisive, as you can see below:

Darvish: 1.327 (0.945)
Scherzer: 1.146
Sanchez: 0.563

The number in parentheses is if we discount Darvish’s K and K/9 leads. (Working backward, it also tells you the K/9 correlation figure is a pretty high .382, which is a close sixth out of the 15 categories.) I considered splitting the difference and discounting half of the lower metric’s correlation figure: this would have left Scherzer a tiny 0.010 ahead.

I am torn on the matter, having no real guidance on which method is best. The closest I can come is to observe that by only one specific method does Darvish come out ahead of Scherzer. Every other way of toting up the numbers gives Scherzer the advantage, even if only paper-thin.

This is easily the closest of the races, and produces my least confidence, even if it gives us the consensus choice at the end. The AL Cy Young Award looks to be going to Max Scherzer this year.

### Most Valuable Player Award, National League

As noted earlier, I am leaving Clayton Kershaw’s chance at the MVP Award out of this equation: I am considering only the position players. The top rank of contenders, according to the pundits and B-P’s voting, consists of Pittsburgh’s Andrew McCutchen and Arizona’s Paul Goldschmidt. Matt Carpenter of the Cardinals and Joey Votto of the Reds filled out B-P’s top four.

Thirteen of the metrics below cover the entire season. The five marked “/Mo” are for a specific month. Again, to cover possibly unfamiliar metrics: wRC is Weighted Runs Created, similar to wOBA and to Bill James’ old Runs Created stat. The plus signs indicate adjustment for league and park. ISO is isolated slugging, SLG minus AVG.

**National League MVP Award**

| Metric | Leader | Metric | Leader | Metric | Leader |
|--------|--------|--------|--------|--------|--------|
| bWAR | McCutchen | fWAR | McCutchen | wOBA | Goldschmidt |
| AVG | Carpenter | OBP | Votto | SLG | Goldschmidt |
| OPS | Goldschmidt | OPS+ | Goldschmidt | ISO | Goldschmidt |
| Runs | Carpenter | RBI | Goldschmidt | HR | Votto |
| wRC+ | Goldschmidt/Votto (tie) | OPS/Mo | McCutchen | OPS+/Mo | McCutchen |
| ISO/Mo | Goldschmidt | wOBA/Mo | McCutchen | wRC+/Mo | McCutchen |

Votto leads the contenders in two full-season categories, on-base percentage and home runs. The latter is ironic, as the knock against him this year was that he was being too passive and not swinging for the fences enough. On-base, though, is easily the least correlative of the triple-slash stats to MVP success, and homers are only slightly better. We’re probably safe in excluding Votto from top-three consideration—and if we do so, the two categories where he led go instead to McCutchen.

Carpenter’s two leading categories are a bit more promising, or at least one of them is. MVP voters still like batting average, even if they’re somewhat more fond of slugging. Beating McCutchen by a single point probably lessens the effect this year. Runs scored, on the other hand, has almost no positive correlation to MVP voting. Votto and Carpenter don’t have quality in the categories they own to overcome quantity. It’s a two-player race between Goldschmidt and McCutchen.

Goldschmidt holds the early lead on full-season stats, though one element of it is misleading. It used to be that leading the league in runs batted in was the short cut to MVP glory. That’s no longer the case. Over the 17 years of my survey, none of the metrics I examined (with the exception of one month of opponents’ wOBA for the Cy Young) has a more negative correlation to awards than RBIs for the MVP. Writers have gotten over the RBI as the prime indicator of batting excellence.

McCutchen has a full-season lead only in WAR categories among the top four contenders. He does better once you drop the fourth-place player, whether that’s Votto or Carpenter (I’m making it Votto, probably following the way the actual voting will go), but he doesn’t overtake Goldschmidt. Slugging is a stronger indicator than WAR, and much stronger than home runs and on-base. Even with the RBI penalty, Goldschmidt maintains a lead—until we get to the Magic Month.

In five statistical categories, I looked at monthly breakdowns as well as the full-season figures. There were two months where the correspondences with the voting were generally stronger than for the entire year, and one of those months was simply dominant, producing most of the best correlation scores in the survey. If you are a top-shelf contender for the MVP Award, and you out-perform your rivals offensively in this month, when writers appear to be paying special attention, odds are you have punched your ticket.

This is where marketing instincts enforce my coyness. I can’t tell you *everything* that’s in my Hardball Times Annual article, and this result is a pretty big and pretty surprising something. I’m going to decline to name the month here. You can always guess, but don’t assume you’ll be right. Alternatively, using the leader information above and especially below, you can probably work it out for yourself, at least so far as to prove your probable first guess wrong.

In the Magic Month, McCutchen made his claim. He led four of the five categories, and the one that fell to Goldschmidt was the only one that has a merely middling correlation. Throw these in with the full-season figures, and McCutchen leapfrogs Goldschmidt.

McCutchen: 2.290
Goldschmidt: 1.481
Carpenter: 0.369

Once again, we have some groups of stats measuring largely the same things. Telescoping the groups affects both front-runners. Goldschmidt would have wOBA/wRC+, OPS/OPS+, and arguably slugging/isolated slugging, compressed. McCutchen has the WAR measures, the monthly OPS/OPS+, and the monthly wOBA/wRC+ brought down to one each. Doing this gives these scores:

McCutchen: 1.293
Goldschmidt: 0.885 (1.069)
Carpenter: 0.369

Even if SLG and ISO are considered separate, as with the parenthetical figure, Goldschmidt still does not close the gap. McCutchen’s success in the Magic Month makes him the system’s choice for the 2013 National League MVP. Unless Clayton Kershaw swoops in and takes it, but as I’ve said, that is beyond the scope of this system.

### Most Valuable Player Award, American League

And we come around to the race that started it all, time-shifted forward one year. Miguel Cabrera of the Tigers and Mike Trout of the Angels are in full contention, joined this year by home run crown winner Chris Davis of Baltimore. Oakland’s Josh Donaldson got Baseball Prospectus’ nod as number three ahead of Davis, but he won’t be lasting long here.

**American League MVP Award**

| Metric | Leader | Metric | Leader | Metric | Leader |
|--------|--------|--------|--------|--------|--------|
| bWAR | Trout | fWAR | Trout | wOBA | Cabrera |
| AVG | Cabrera | OBP | Cabrera | SLG | Cabrera |
| OPS | Cabrera | OPS+ | Cabrera | ISO | Davis |
| Runs | Trout | RBI | Davis | HR | Davis |
| wRC+ | Cabrera | OPS/Mo | Cabrera | OPS+/Mo | Cabrera |
| ISO/Mo | Cabrera | wOBA/Mo | Cabrera | wRC+/Mo | Cabrera |

Donaldson did not hit the board anywhere, conveniently letting us narrow the race directly to three. Chris Davis leads in three metrics, two of which are only mildly correlative and the other, as mentioned before, having a negative correlation. Mike Trout leads in WAR measures, and also in runs scored, but this last has a correspondence just barely above zero. Trout found out last year how much run-scoring and Wins Above Replacement guarantee you in the MVP race. This year will repeat the lesson.

Miguel Cabrera has a wide base of full-season accomplishment, sweeping the triple-slash numbers along with the omnibus offensive measures of OPS, wOBA and wRC+. Backing that up is his domination in the Magic Month. There’s not much room to argue for his competitors against this flood of numbers.

Helping clinch the case is how voters for the American League MVP like to pick a winner from a playoff team. They do this for both leagues, but the trend is stronger for the AL. Cabrera’s Tigers won their division, again, while Trout’s Angels and Davis’ Orioles missed October. (Consideration for Josh Donaldson may have sprung from his being the clear best performer on the division-topping A’s, though he has a pretty good WAR case too.)

Historically, it’s tough for a player to win a second straight MVP without improving his performance in the second year. Barry Bonds’ four straight MVPs in 2001-4 may have broken that block, but his was an extraordinary case. At first glance, Cabrera going from the Triple Crown to leading just one of the Crown categories, batting average, works against him, and gives some daylight to Trout and Davis.

A closer look revives his case. His triple-slash numbers rose by 18, 49, and 30 points in a year when league-wide numbers were mostly steady. His WAR numbers, averaged between the two sources, rose by three-tenths of a point (while Trout’s fell by about two-thirds of a point, in a season where he played a month longer than in 2012). His OPS+ rose 23 points, from a Triple Crown season (while Trout’s rose eight). Cabrera has a better relative case for the award—and the award, after all, is relative to all the other players—than he did a year ago, when he took it handily.

That’s the clear expectation again. I won’t even grind through the numbers: they’re just too strong. Miguel Cabrera is going back-to-back with MVP Awards, to the surprise of nobody if to the disappointment of a few.

### Wrap-up

This hasn’t been as dramatic a review as it could have been. In four major awards categories, the method went with the anecdotal favorite all four times. In one sense, this is satisfying: it’s not coming up with peculiar results. The consensus of baseball commentators fits with the consensus of statistics.

In another sense, it’s a mild disappointment, because the system doesn’t get a stiff test. Not one of the races is too close to call with the punditry, even if one of them ended up close in my methodology. If Yu Darvish pulls the upset, it would be a mark in favor of being conservative in compressing similar categories into one. It would also speak to the predictive power of winning the strikeout title, the most strongly correlative metric of all those studied for either award.

It will probably take a couple more years of awards to show whether the predictive powers I’ve been testing out are as strong as hoped. Of course, those will also provide a few more years of statistics to feed into the system, altering the underlying numbers. Good: a system that doesn’t adjust to new facts isn’t worth much, and good sabermetricians will take all the added data they can get.

For now, we’ve got some predictions, and in a couple weeks we’ll know how good they really were. Pretty soon, we will also have the THT 2014 Annual, where all those annoying little gaps in my presentation will get filled in. The other 96 percent of the book should be even better, too.

Steve said...

Votto did not have more HR than Goldschmidt.

Fenderbelly said...

Very interesting Shane, nice piece.

Dr. Doom said...

Steve is correct. Goldschmidt led the NL in HR. Perhaps you added things right, but the presentation is incorrect.

Also, I think this is a pretty neat thing. I’m excited for the full article in the Annual. I do have one question, though: how well does this method back-correlate? Of the 34 races you discussed above, how many times did your system pop out the “correct” answer? I don’t need any details, but how well did the system work when you try it backwards?

Shane Tourtellotte said...

Dr. Doom: Actually, I didn’t do any back-correlation. The THT Annual article dealt with finding the correspondences, figuring out which stats match up best and worst with winning the awards. Using it as a predictive, or matching up old votes to the whole panoply of metrics, came very late in the process. I suggested putting predictions for this year’s awards in the Annual article to Dave Studeman, but he thought that I could do it on the website instead, which I have done.

Back-checking the old races is an intriguing idea, though. It might help me figure out how to handle those related stats that perhaps should be telescoped into one measure. (I can use help with that, as you’ll see in a moment.) I’m already into stat-digging for my next fortnightly article, but after that, I may be coming back to this.

As to Votto, Goldschmidt, and homers: okay, I don’t know how the heck I goofed on that. Goldschmidt did win the home run crown. This affects Votto only as it more thoroughly chases him from the top three. As to the race with McCutchen, it brings Goldschmidt closer on the two main measurements, and if you count slugging and isolated slugging as wholly separate, Goldschmidt inches ahead by 1.187 to 1.175.

I was inclined not to make SLG and ISO separate, and included that way of counting in order to show that McCutchen was ahead even if you tilted everything against him. Well, he’s no longer ahead with all that tilting, but I have to stick by him as my MVP prediction. He does have the added advantage of having been on a playoff team, while Goldschmidt’s D-Backs ended their year in September. There is a bias in NL MVP voting for players on post-season teams, not as strong as for the AL, but there. That counts to give McCutchen a bit of his cushion back.

Dr. Doom said...

Shane,

That’s all very interesting. I’m surprised you didn’t back-check! The first thing I thought was, “well, you used the 1996-2012 data… how well did it actually work as predictive for those years?” In particular, I wondered about McGwire-Sosa-Bonds in 1998, Kent-Bonds in 2000, Ivan Rodriguez in 1999, either of Juan Gonzalez’s MVPs, Justin Morneau in 2006, and a handful of others. My guess is that, for many of them (if not even MOST of them), math won’t yield a right answer. And that’s because (my opinion about the matter is) the writers, as a sort of hive-mind, collectively decide on a narrative that will carry the day for the season in question, but not for a different season. I’ve tried to do similar things (though never with as many metrics as you’re using; bravo for that!), and when I back-check over previous years, it works less than half of the time, and I end up starting over. I’ve just given up on this whole project. But I’m glad to see someone is still giving it a shot!

I was also wondering if/how you were factoring in team performance. It seems like there should be a way of doing it. Either with a simple binary (a y/n for playoffs and/or division winning, which adds a numerical bonus to the player in question) or using the team’s winning percentage as some sort of multiplier in the end. Or a combination of the two. Anyway, it seems to be historically important (it affected 5 of the last 6 MVP races, that I recall).

Thanks for the response!

steven said...

I think yall are FOS.