A natural issue when designing your league is how to value players’ stats—should home runs be worth more than wins, and so forth. The problem is most apparent in points leagues, where you have to designate how many points each chosen stat gets: five points per home run, 10 points per win, etc. Rotisserie leagues, though, actually face a special case of this problem, in which each chosen stat gets the same (non-linear) weight while every stat not chosen gets zero points.
There are many candidate systems for “optimally” valuing stats. Better systems will lead to more competitive and/or more baseball-like (I called this verisimilitude in a previous article) fantasy leagues. In this article, I will discuss a few. For convenience’s sake, I will mostly discuss them within the context of a points league, since that makes examples easier.
The most popular method for valuing stats is most likely the “ad hoc method.” Basically, the league chooses points in the hope of getting the outcome it desires. If giving five points for a win and three points for a save turns out to overvalue relievers relative to that desired outcome (too many teams stock their rosters with closers and Brad Lidge is drafted before Johan Santana), then the league adjusts the points the following year. In a league like this, if singles, doubles and triples are each valued, it is pretty common for them to be worth one, two and three points respectively.
The good thing about this system is that the league has at least a rough sense of what it is trying to accomplish: it roughly knows what it wants. The bad thing is that its way of getting there is stumbling and drawn out.
A different method, recently mooted in the comments section of some previous THT Fantasy Focus articles, is to use a “sabermetrics”-based value system. For instance, you could use some sort of marginal win share (MWS) for each stat and then use relative MWS (ratios, that is) to assign relative points. For example, you could compute the value, in wins (or fractions of a win) for some baseline team, of one run scored by some baseline player, and compare it to the corresponding value of an RBI or of .01 points of batting average.
The good thing about this system is that it computes things in an elegant and precise way. The bad thing is that it isn’t clear what the system is trying to accomplish—it isn’t clear how it makes a league more competitive or baseball-like, because it doesn’t take into account the other parameters of your league. For example, who and what should those baseline players and teams be? Deciding this could get absurdly complex, since the average player or the replacement player in MLB is not the same as the average starter or replacement player in your fantasy league.
Instead, I suggest a sort of middle ground. Take a candidate value assignment, like five points for a win and three for a save. Get the stats for last year’s pitchers (or use projections for this year, such as Marcel). Compute the point value of each pitcher and then sort by value. Eyeball it. If there are 10 closers in the top 15 pitchers by value, then perhaps you want to adjust things. Do the same for pitchers versus batters, and so forth.
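The compute-sort-eyeball loop above is easy to script. A minimal sketch, assuming the candidate scoring of five points per win and three per save; the stat lines and names below are made up for illustration, not real player data:

```python
# Candidate scoring: five points per win, three per save (an assumption
# for illustration; swap in your league's own values).
SCORING = {"W": 5, "SV": 3}

# Hypothetical pitcher stat lines: (name, role, wins, saves).
pitchers = [
    ("Starter A", "SP", 16, 0),
    ("Closer B",  "RP", 4, 42),
    ("Starter C", "SP", 12, 0),
    ("Closer D",  "RP", 2, 35),
]

def points(wins, saves):
    """Point value of a pitcher under the candidate scoring."""
    return SCORING["W"] * wins + SCORING["SV"] * saves

# Sort pitchers from most to least valuable under this scoring.
ranked = sorted(pitchers, key=lambda p: points(p[2], p[3]), reverse=True)

# The eyeball step: how many closers land near the top of the list?
top = ranked[:2]
closers_in_top = sum(1 for p in top if p[1] == "RP")
print([p[0] for p in ranked], closers_in_top)
```

With real stat lines you would extend the top slice to 15 or so; if closers crowd it, dial the save points down and re-run.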
This system isn’t as numerically ambitious as the “sabermetric” method, and it doesn’t correct for risk, but it should help you avoid really big mistakes in valuation. It is also fairly easy to personalize to your league. For instance, if you have a 12-team league with two catcher spots, you can take a look at the 24th most valuable catcher. If it is a catcher with 60 at-bats and five steals, rather than a catcher batting .265 with 30 runs and 250 at-bats, then perhaps you want to adjust things. If you decide that the best balance between competitiveness and verisimilitude is for rosters to carry about seven starting pitchers and five relievers, then you can take a look at what the top 144 pitchers by point value in the league would be and see how many of them are relievers, and so on.
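That last check is just a count against a target implied by your roster rules. A sketch under the assumptions in the text (12 teams, seven starter and five reliever slots); the ranking here is a synthetic placeholder standing in for your real list of pitchers sorted by point value:

```python
# League parameters from the example: 12 teams, 7 SP and 5 RP slots.
TEAMS = 12
SP_SLOTS, RP_SLOTS = 7, 5

rostered = TEAMS * (SP_SLOTS + RP_SLOTS)   # 144 pitchers rostered league-wide
target_relievers = TEAMS * RP_SLOTS        # 60 relievers if values are balanced

# Placeholder for a real value-sorted pitcher list: synthetic roles with
# roughly one reliever in every three ranking slots.
ranked_roles = ["RP" if i % 3 == 0 else "SP" for i in range(200)]

# Count relievers among the pitchers who would actually be rostered.
relievers_in_pool = ranked_roles[:rostered].count("RP")
print(relievers_in_pool, "relievers in the top", rostered,
      "vs a target of", target_relievers)
```

If the count comes in well under (or over) the target, the candidate scoring is undervaluing (or overvaluing) relievers relative to the roster shape you want.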