Ranking Organizations

If you want to rely on statistical measures, how do you rank a farm system? It’s tough enough to compare two disparate prospects, let alone measure two entire organizations against each other. For John Burnson’s Graphical Player 2007, I tried to overcome some of those difficulties and compiled a set of stat-based organizational rankings.

I want to emphasize that the method I’m about to describe is only a first step: Along the way, I’ll point out the places I’d like to improve on the process and how I might do that. The value here isn’t the exact numbers, but the framework for thinking about how to compare minor league systems.

Despite the flaws in the system, I think you’ll find that the resulting rankings pass the sniff test. Of course, we could spend all day arguing about something like this, but I find it much more interesting to zero in on the places where the rankings don’t seem right and see if the system itself is misleading.

The Stats

To keep things simple, I used a single stat for every pitcher and hitter in the minors. OPS was a natural choice for hitters, and to keep the graphs (which we’ll get to a bit later) on the same scale, I used OPS against for pitchers. To enable comparisons of everyone from Low-A to Triple-A on the same scale, I used equivalent (MLE) OPS, which is adjusted for park, league and level.

I took into account only two other variables for each player: their playing time and their age. It’s foolish to compare a 21-year-old in the Florida State League to a 25-year-old in the International League, and, as you’ll see in a moment, my ranking approach reflects that.

I considered playing time mainly to differentiate relief pitchers from starters. A team with a bunch of great relief prospects doesn’t have as strong a system as a team with several good starting prospects. (You could argue that adjusting for playing time isn’t sufficient, especially since differences in OPS against will be more extreme for relievers.)

Rating Players

First, I found averages and distributions for every age throughout the minors. In other words, I could compare every player to the average production from his age group. I included all players with reasonable playing time between the ages of 19 and 27—younger than 19, there are only a few players at Low-A or above in all of baseball, and older than 27, players don’t really count as prospects anymore. (You could set the age limit lower, of course. It doesn’t end up mattering very much.)

Then, for every player in the minors, I determined whether he was above average for his age group, and whether he was in the 90th percentile or higher for his age group. (I also determined whether he was in the 75th percentile or higher, which helps make the graphs more interesting, but doesn’t affect the rankings.)
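The classification step described above can be sketched in a few lines of Python. The player names, ages, and equivalent-OPS figures here are invented for illustration, and the nearest-rank percentile function is just one simple way to draw the cutoffs:

```python
# Sketch of the age-group percentile classification described above.
# All player data here is made up for illustration.
from collections import defaultdict

# (name, age, equivalent OPS) -- hypothetical sample of one age group
players = [
    ("A", 21, 0.850), ("B", 21, 0.700), ("C", 21, 0.640),
    ("D", 21, 0.910), ("E", 21, 0.760), ("F", 21, 0.590),
]

def percentile_cutoff(values, pct):
    """Return the value at the given percentile (simple nearest-rank)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))) - 1))
    return ordered[k]

# Group equivalent OPS by age, then flag each player's tier.
by_age = defaultdict(list)
for name, age, ops in players:
    by_age[age].append(ops)

tiers = {}
for name, age, ops in players:
    group = by_age[age]
    if ops >= percentile_cutoff(group, 90):
        tiers[name] = "stud"    # 90th percentile or better for his age
    elif ops >= percentile_cutoff(group, 50):
        tiers[name] = "depth"   # above the age-group median
    else:
        tiers[name] = "other"
```

With a real data set you'd run this across every age from 19 to 27, and pitchers would use OPS against with the comparison reversed (lower is better).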

Rating Organizations

From there, I could count how many 50th-percentile guys and 90th-percentile guys each organization had. Convert those into percents, and you can compare each team’s “studs” (90th-percentile guys) against each other, and do the same with their “depth” (percent of 50th-percentile guys). All of this is rather arbitrary, but it’s meant to include both the number of top-tier prospects and the overall quality of the system.

For both batters and pitchers, I weighted “studs” twice as heavily as “depth,” which I think reflects general thinking about the value of a farm system. Further, I weighted batting prospects 50% more heavily than pitching prospects to include some adjustment for the unpredictability of young arms.
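One plausible reading of that weighting scheme, expressed as code (the percentages fed in at the bottom are invented, not any real team's numbers):

```python
# Studs count twice as much as depth; the batting side counts 50% more
# than the pitching side. Input percentages are hypothetical.

def side_score(stud_pct, depth_pct):
    """Combine one side's stud and depth rates, with studs weighted 2x."""
    return 2.0 * stud_pct + depth_pct

def org_score(bat_stud, bat_depth, pit_stud, pit_depth):
    """Batting weighted 1.5x to discount the unpredictability of young arms."""
    return 1.5 * side_score(bat_stud, bat_depth) + side_score(pit_stud, pit_depth)

# Hypothetical organization: 20% hitting studs, 60% hitting depth,
# 10% pitching studs, 45% pitching depth.
score = org_score(0.20, 0.60, 0.10, 0.45)
```

The absolute number is meaningless on its own; only the comparison across the 30 organizations matters.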

The most important caveat about these rankings is that they aren’t particularly forward-looking. The whole system uses only 2006 stats. That means that prospects who have been traded (including in mid-season) are partially counted toward their old team’s score, and the rankings include players who have graduated to the big leagues.

Further, it considers only players who logged substantial time at Low-A or higher. Most 2006 draftees therefore aren’t included, nor is anybody else who spent the year in rookie or short-season ball, so teams get next to no credit for a strong 2006 draft.

The Good Stuff, Part One

Since this whole system was designed for The Graphical Player, the idea was to create something that would make for an interesting pictorial representation. Here’s what we put together for the Dodgers:

The table on the left indicates how strong the team is by the measure of “studs” and “depth.” The Hitters and Pitchers graphs give you an at-a-glance idea of the strength of each part of the system. The dark dots (with accompanying labels) are 90th-percentile players, the dark gray dots are 75th-percentile guys, the light gray dots are 50th-percentilers and the white dots represent everybody else.

The Good Stuff, Part Two

According to this system, the Diamondbacks have the greatest concentration of offensive “studs.” They have 11 guys who are 90th-percentile or better against their age group; in other words, nearly 25% of their system is in the top 10% of minor leaguers. The Mets also stand out, with nine studs.

Every team had at least one offensive stud, but the Phillies, Blue Jays and Orioles each had only one.


There wasn’t nearly so wide a gap between the haves and the have-nots when it came to offensive depth, but the Dodgers stick out: More than 80% of their hitters were better than average for their age group. The second-best team was below 70% by the same measure.

On the pitching side, the Cubs and Twins led with nine studs. The Indians and Blue Jays were close behind with eight. As with hitters, each team had at least one, but the Royals and the Nationals only had that lone stud—in each case a reliever.

No team had the pitching depth equivalent to that of the Dodgers’ batsmen, but the Indians were on top with more than 70% of their pitchers above average. The Royals scraped the bottom of the barrel, with barely a quarter of their pitchers in the top 50%.

The Good Stuff, Part Three

Here are the top five farm systems for hitters:

  1. Los Angeles Dodgers
  2. Arizona Diamondbacks (trailing the Dodgers by less than 1%)
  3. Colorado Rockies
  4. Cleveland Indians
  5. San Diego Padres

The top five for pitchers:

  1. Minnesota Twins
  2. Cleveland Indians
  3. Chicago Cubs
  4. Philadelphia Phillies
  5. Milwaukee Brewers

Putting it all together, here’s how each team’s farm system ranks:

  1. Los Angeles Dodgers
  2. Cleveland Indians
  3. Arizona Diamondbacks
  4. Chicago Cubs
  5. Milwaukee Brewers
  6. Minnesota Twins
  7. Detroit Tigers
  8. New York Mets
  9. Colorado Rockies
  10. Philadelphia Phillies
  11. New York Yankees
  12. Houston Astros
  13. Boston Red Sox
  14. Tampa Bay Devil Rays
  15. San Diego Padres
  16. Toronto Blue Jays
  17. Chicago White Sox
  18. Pittsburgh Pirates
  19. Seattle Mariners
  20. Cincinnati Reds
  21. Los Angeles Angels
  22. Oakland A’s
  23. St. Louis Cardinals
  24. San Francisco Giants
  25. Atlanta Braves
  26. Florida Marlins
  27. Texas Rangers
  28. Kansas City Royals
  29. Baltimore Orioles
  30. Washington Nationals

For more detailed rankings, including graphical representations of all 30 systems, you’ll have to check out the book.

Moving Forward

In describing my method, I’ve mentioned several ways in which it is lacking: Notably, it counts players who have left, it doesn’t count players below Low-A and it uses arbitrary distinctions such as the 50th and 90th percentiles to generate ranking points.

Some of those are solvable problems, but if I were to pick one thing to improve upon for next year’s rankings (fortunately for me, I don’t have to limit myself), I’d expand the statistical inputs used for each player. I like the idea of comparing against each player’s age group, but using only OPS and OPS against doesn’t give you the whole picture.

For hitters, it would be better to use something like wOBA or OPS adjusted for BABIP. For pitchers, it seems crucial to include strikeout rate, and probably walk rate as well. Especially for minor league relievers, prospect status only sometimes relates to traditional performance measures like OPS.
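As a rough illustration of what a wOBA input would look like, here is a back-of-the-envelope version using the approximate linear weights published in The Book (Tango, Lichtman and Dolphin); the exact coefficients vary by season, and the stat line below is invented:

```python
# Rough wOBA from basic counting stats, using approximate linear weights
# from "The Book." Coefficients are era-dependent; these are illustrative.

def woba(bb, hbp, singles, doubles, triples, hr, pa):
    """Weighted on-base average: each event scaled by its run value."""
    numerator = (0.72 * bb + 0.75 * hbp + 0.90 * singles
                 + 1.24 * doubles + 1.56 * triples + 1.95 * hr)
    return numerator / pa

# Hypothetical season: 50 BB, 5 HBP, 100 1B, 30 2B, 5 3B, 20 HR in 600 PA
line = woba(50, 5, 100, 30, 5, 20, 600)
```

A wOBA computed this way could slot directly into the age-group percentile machinery in place of equivalent OPS.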

To correct for some of the arbitrariness of the percentile divisions, I could weight performances continuously: a 99th-percentile performer, such as Philip Hughes, would be worth more toward a team’s ranking than a 91st-percentile guy, like many relievers who had good years. That would also remove the need for the division between “studs” and “depth,” which are, in practice, closely related.
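A continuous alternative to the stud/depth buckets might look like this sketch, where each player's age-group percentile contributes directly to the team score (the exponent is an arbitrary knob, and the percentile list is invented):

```python
# Continuous scoring: a 99th-percentile season is worth more than a 91st,
# with no hard cutoffs. The exponent controls how top-heavy the scoring is.

def continuous_score(percentiles, power=2.0):
    """Sum of (percentile/100)^power across an organization's players."""
    return sum((p / 100.0) ** power for p in percentiles)

# Hypothetical system: one elite prospect plus assorted depth.
score = continuous_score([99, 91, 75, 60, 50])
```

Raising the exponent rewards concentration of elite talent; lowering it toward 1 rewards depth, so the single knob replaces the separate stud and depth weights.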

I’m sure that if you’ve gotten this far, you get the idea: The method described above is useful as a general approach, and provides a snapshot of each farm system’s statistical prowess. With the improvements I’ve outlined in the last few paragraphs, I imagine moving it toward more of an aggregate projection system—still fairly back-of-the-envelope, as such things go, but a way to account for more of the tools we use in evaluating prospects and their organizations.

References & Resources
So much you see above (including permission to republish the Dodgers graph) is thanks to John Burnson, editor of Graphical Player 2007. He made the pretty pictures, and many of the initial ideas for the project were his as well.
