How good is NCAA Division 2?

Once your interest in college baseball goes beyond the “Road to Omaha” stage, there’s seemingly no end to the complexity. First, there are the nearly 300 Division 1 teams, most of which have at least a player or two that major league scouts are tracking. If you go deeper still, you may never emerge at all.

Heedless of the dangers, deeper we must go. After all, big-league teams cast a wide net. In last year’s amateur draft, clubs took about 700 Division 1 athletes, but also snagged 66 players from Division 2 schools, 20 from Division 3 programs, and another 30 from the NAIA. Pedro Alvarez they are not, but these aren’t levels of play that statisticians can afford to ignore.

All about Division 2

In general, D-2 schools are smaller than D-1 schools, and they are much more constrained in their ability to hand out scholarships. They are able to give partial scholarships to some student athletes, but these limitations ensure that prime college-bound talent heads to D-1 instead.

Division 2 baseball is made up of about 235 schools across 26 conferences. In practice, D-2 is not as uniform as D-1: Many southern schools have been playing games since late January, while some northeastern schools won’t get their season underway until mid-March.

To get a better grasp on the level of play in Division 2, I built a database consisting of the final scores of every game played in 2008 by a D-2 team. (Okay—I missed a few, but don’t blame me, blame the schools that didn’t post their results.) From there, I calculated Pythagorean records, strength of schedule, and ultimately power ratings for every D-2 team. Using the methods I described in my last article to rate D-1 conferences, I did the same for D-2 leagues. We’ll see those results in a bit.
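
For readers who want to follow along at home, here is a minimal sketch of that pipeline in Python. The game list, the Pythagorean exponent, and the simple damped opponent adjustment are all my own stand-ins; the actual strength-of-schedule and power-rating method is the one described in the earlier article.

```python
from collections import defaultdict

# Hypothetical input: one row per game from the database described above,
# as (team, opponent, runs scored, runs allowed). Names and scores here are
# placeholders, not real 2008 results.
games = [
    ("Mount Olive", "Catawba", 9, 3),
    ("Catawba", "Tusculum", 5, 6),
    # ... every game involving a Division 2 team
]

EXPONENT = 1.83  # Pythagorean exponent; my assumption, not stated in the article

def pythag(rf, ra, exp=EXPONENT):
    """Pythagorean winning percentage from runs scored and allowed."""
    return rf ** exp / (rf ** exp + ra ** exp)

def odds(p):
    return p / (1.0 - p)

# Tally runs and opponents from both teams' points of view.
runs_for, runs_against = defaultdict(float), defaultdict(float)
opponents = defaultdict(list)
for team, opp, rf, ra in games:
    for a, b, scored, allowed in ((team, opp, rf, ra), (opp, team, ra, rf)):
        runs_for[a] += scored
        runs_against[a] += allowed
        opponents[a].append(b)

# Start from raw Pythagorean records, then iterate a simple schedule
# adjustment: shift each team's record, on the odds scale, by the average
# rating of the teams it played. Damping keeps the iteration stable.
rating = {t: pythag(runs_for[t], runs_against[t]) for t in runs_for}
for _ in range(50):
    sched = {t: sum(rating[o] for o in opponents[t]) / len(opponents[t])
             for t in rating}
    for t in rating:
        o = odds(pythag(runs_for[t], runs_against[t])) * odds(sched[t])
        rating[t] = 0.5 * rating[t] + 0.5 * o / (1.0 + o)
```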

How Division 2 compares

Among the 6,000 or so games that D-2 teams played last year, 143 were against Division 1 opponents. It’s not as large a sample as I would like, but it’s a decent start. In those 143 games, the D-2 teams were outscored 1,275 to 837, for a D-1 Pythagorean winning percentage of .684.
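
As a quick sanity check on that figure (the article doesn’t say which Pythagorean exponent it uses), an exponent of roughly 1.83 reproduces .684 from those run totals, while the classic exponent of 2 comes out closer to .699:

```python
def pythag(rf, ra, exp):
    return rf ** exp / (rf ** exp + ra ** exp)

print(round(pythag(1275, 837, 1.83), 3))  # 0.684
print(round(pythag(1275, 837, 2.00), 3))  # 0.699
```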

It’s not quite that simple, though. Only 42 of the 143 inter-division games were home games for Division 2 teams. I estimate NCAA home-field advantage at .550, meaning that the D-1 winning percentage gets knocked down 17 points, to .667. Still a clear advantage, but at least we aren’t penalizing D-2 athletes for those long bus rides.
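
The article doesn’t spell out the mechanics of that correction, so here is one plausible reconstruction, assumed rather than quoted: deflate the D-1 odds by a .550 home edge, weighted by how lopsided the 101-to-42 home/away split was. It lands within a point of the .667 above.

```python
def odds(p):
    return p / (1.0 - p)

raw_d1_pct = 0.684       # D-1 Pythagorean pct in the 143 cross-division games
home_edge = 0.550        # estimated NCAA home-field advantage, per the article
d1_home, d1_away = 101, 42

# Fraction of the sample by which D-1 home games outnumbered D-1 road games.
excess_home = (d1_home - d1_away) / (d1_home + d1_away)

# Strip that much home-field edge off the observed result, on the odds scale.
neutral_odds = odds(raw_d1_pct) / odds(home_edge) ** excess_home
print(round(neutral_odds / (1.0 + neutral_odds), 3))  # 0.666
```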

Let’s assume for a moment that all the D-1 and D-2 teams in those games were average for their division. Then we can apply the log5 method and calculate that a .500-quality Division 2 team would, if it moved to Division 1, become a .333-quality team at the higher level. In more practical terms, an average D-2 team would fit inconspicuously into the SWAC or the MEAC, the lowest-quality D-1 conferences.
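
For anyone who hasn’t run into it, log5 is the standard way to turn two teams’ “quality” numbers (expected winning percentage against a .500 opponent) into a head-to-head probability. A minimal version, checked against the figures above:

```python
def log5(p, q):
    """Probability that a team of quality p beats a team of quality q."""
    return p * (1.0 - q) / (p * (1.0 - q) + q * (1.0 - p))

# An average D-1 team (.500) against an average D-2 team (.333 in D-1 terms):
print(round(log5(0.500, 0.333), 3))  # 0.667, matching the estimate above

# A generic example: a .600 team against a .400 team.
print(round(log5(0.600, 0.400), 3))  # 0.692
```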

We don’t have to settle for that assumption, though. Using the team strength calculations I outlined a couple of weeks ago, we can figure out just how good the teams are that played in those 143 inter-division games. It turns out that the D-2 teams were close to the middle of their respective pack, at .478. Their D-1 opponents, however, were of lower relative quality; they played .355-quality baseball within their own division.

Running the numbers with that additional data, the picture sharpens: a group of slightly-below-average D-2 teams played .333 ball against .355-quality D-1 schools, which implies that a true .500 D-2 team would manage a mere .231 winning percentage against middle-of-the-pack D-1 opponents.
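
Here is my reconstruction of that back-out, on the odds scale, using only the numbers quoted above; it isn’t the author’s code, but it lands on the same .231.

```python
def odds(p):
    return p / (1.0 - p)

def from_odds(o):
    return o / (1.0 + o)

obs_pct = 0.333         # D-2 record in the cross-division games, after the HFA adjustment
opp_quality = 0.355     # quality of the D-1 opponents, within their own division
sample_quality = 0.478  # quality of the sampled D-2 teams, within their own division

# Quality of the sampled D-2 teams, expressed in D-1 terms.
sample_in_d1 = from_odds(odds(obs_pct) * odds(opp_quality))
print(round(sample_in_d1, 3))  # 0.216

# Bump that up to a true .500 D-2 team by the odds ratio separating
# .500 from .478 within Division 2.
avg_d2_in_d1 = from_odds(odds(sample_in_d1) * odds(0.500) / odds(sample_quality))
print(round(avg_d2_in_d1, 3))  # 0.231, the figure quoted above
```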

Applying reason, logic and guesswork

That’s a lot of numbers, and the results don’t entirely jibe with my intuitions.

While it seems awfully low at first blush, a .231 winning percentage isn’t at all implausible. Thirteen D-1 teams were worse than that in 2008, including North Carolina Central, playing its first year in Division 1. The other newbie, Presbyterian, was not dramatically better, finishing at .300.

A more relevant smell test is what this conclusion says about the top end of Division 2. Mount Olive was both the national champion and the strongest team on paper, boasting an astonishing .885 schedule-adjusted Pythagorean winning percentage. To win 88.5 percent of its games against average D-2 schools (worth .231 against D-1 competition), it would have to be a .698-quality team by a D-1 yardstick.
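
That .698 comes from running the same odds-ratio translation in reverse: what D-1 quality wins 88.5 percent of the time against .231-quality opposition?

```python
def odds(p):
    return p / (1.0 - p)

o = odds(0.885) * odds(0.231)   # Mount Olive within D-2, times the assumed D-2 baseline
print(round(o / (1.0 + o), 3))  # 0.698
```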

A minute ago, I thought that .231 seemed really low. This implication, though, is interesting in the opposite direction. A .698-quality team ranks within the top 25 in Division 1, just above Irvine and Missouri, just below South Carolina and Long Beach State. So, is that plausible?

In the amateur draft last year, MLB teams selected five players from Mount Olive. Granted, they were low picks; the average of the five draft positions was No. 1,112. For comparison, here are those four comparable D-1 teams:

  • Irvine: six picks (average position: 754)
  • Missouri: six picks (average position: 579)
  • South Carolina: seven picks (average position: 441)
  • Long Beach State: 11 picks (average position: 238—wow!)

It shouldn’t come as a surprise that Division 1 schools have the high-round studs; after all, someone with the talent of an Aaron Crow (Missouri) or a Justin Smoak (South Carolina) should be going where they’ll get the most press and scholarship money. The number and quality of picks suggest that calling Mount Olive a .698-quality team is a stretch, but not a completely outlandish one.

Since Mount Olive was so dominant last year, it’s worth digging a bit deeper. (I’ll share some more detailed rankings of top D-2 teams below.) If we assume that an average D-2 team is a .231-quality D-1 team, that means seven or eight D-2 teams would be average or better by Division 1 standards. Another way of looking at that: The quality of play in the D-2 championship would be roughly equal to that in a mid-range (or slightly better) D-1 conference, such as the Big Ten or the Sun Belt. I’m willing to accept that.

Less reason and logic, more guesswork

Our 143-game sample is not only on the small side, it’s also probably biased against the weaker teams. If you’ll indulge me for a moment, I’ll quote myself on the subject of why the weaker conferences in D-1 look so bad in non-conference games against their stronger counterparts:

Perhaps the most serious problem is that, after the first few weeks of the season, most schedules involve conference games on the weekend, with non-conference games relegated to Tuesday and Wednesday. Since teams throw their best pitchers over the weekend, non-conference games are a test of depth—something high-profile teams have, and others don’t. Using this second method is kind of like ranking MLB teams based on their performance in games started by their fourth and fifth starting pitchers. It measures something, but it doesn’t tell the whole story.

The stronger the team, the deeper the pitching staff. Top D-1 programs have up-and-coming freshman and sophomore starters pitching on Tuesdays and Wednesdays. Everybody else sends their right fielder to the mound.

In other words, we’re ranking these D-2 teams using data from their worst games. Looking back at the numbers for the weakest D-1 conferences, note that in non-conference (generally Tuesday and Wednesday) games, SWAC teams played .129-quality baseball; taking their entire season into account, they come out at .317. In part, that’s to be expected—when they are playing outside of the conference, they must be playing better teams, because there aren’t very many teams that are worse. But some of that is due to the shallowness of the worst teams.

So, there is at least some justification to claim that the average D-2 team is better than a .231-quality D-1 team. The D-1 MEAC, for instance, was .209-quality against other conferences, .333 overall. Calling the average D-2 team anywhere close to .330 violates just about any smell test—that would make Mount Olive a .790+ team, slotting it at No. 4 in Division 1, right behind Florida State. It’s possible that, since our sample relies heavily on the performance of low-end D-1 teams, the D-1 schools just aren’t that much deeper than their D-2 opponents—at least not in a way that causes their Tuesday/Wednesday results to be skewed.

The “Could Mount Olive possibly be that good?” test provides a useful counterweight against any effort to suggest that D-2 is better than .231. If we nudge up the average to .250, that would make Mount Olive .719 by D-1 standards, better than all but 12 D-1 teams, right between Coastal Carolina and Oral Roberts. I suppose the case could be made, but if this were a video lecture, you’d see a grimace on my face that strongly hints to the contrary.
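
To make that sensitivity explicit, here is the same translation swept across a few assumed baselines for the average D-2 team; the D1(200) and D1(250) columns in the tables below are built the same way.

```python
def to_d1(d2_pct, baseline):
    """Translate a within-D-2 winning percentage to a D-1 equivalent,
    given an assumed D-1 quality for the average D-2 team."""
    o = (d2_pct / (1 - d2_pct)) * (baseline / (1 - baseline))
    return o / (1 + o)

for baseline in (0.200, 0.231, 0.250, 0.330):
    print(baseline, round(to_d1(0.885, baseline), 3))
# Roughly 0.658, 0.698, 0.720 and 0.791; the published .719 at the .250
# baseline presumably reflects an unrounded version of Mount Olive's .885.
```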

Ranking Division 2

To my surprise, the dispersion of team quality in Division 2 is similar to that in D-1. While Mount Olive’s .885 adjusted winning percentage far outstrips the best D-1 has to offer (last year, that was North Carolina, at .824), the number of teams better than .700 is roughly similar: 26 (11 percent) in D-2 against 22 (7 percent) in D-1. The other tail is more striking, with 39 (17 percent) of D-2 teams under .300 relative to their division, against 17 (6 percent) in D-1. It seems reasonable to me that, at lower levels, there are more schools that participate but do not recruit and compete aggressively.

Interestingly, conference strength is less dispersed in D-2 than in D-1. The difference isn’t substantial, and it appears to arise because power isn’t concentrated in a handful of conferences, as it is in Division 1.

Since, for the most part, the teams at the high end of Division 2 have the athletes who turn pro, let’s look at the top D-2 schools. In the table below, I’ve included two estimates of each team’s quality relative to Division 1. “D1(200)” reflects a pessimistic read on D-2 quality (as if the average D-2 team were a .200-quality D-1 team), while “D1(250)” is an optimistic translation (average D-2 team at .250).

Win%+    D1(200)  D1(250)  School
0.885    0.658    0.719    Mount Olive
0.834    0.557    0.627    Anderson (IN)
0.800    0.499    0.571    Delta State
0.798    0.497    0.569    Southern Arkansas
0.773    0.460    0.532    Columbus State
0.772    0.458    0.530    Tampa
0.769    0.455    0.527    Franklin Pierce
0.767    0.451    0.523    Catawba
0.765    0.449    0.521    West Alabama
0.764    0.447    0.519    Sonoma State
0.753    0.433    0.505    Tusculum
0.749    0.427    0.499    Southern Connecticut State
0.744    0.421    0.492    West Virginia State
0.739    0.414    0.486    Wayne State (NE)
0.736    0.410    0.481    Ouachita Baptist
0.734    0.409    0.480    Nebraska Omaha
0.732    0.406    0.477    Central Missouri
0.722    0.393    0.464    Albany State
0.719    0.390    0.460    Abilene Christian
0.718    0.389    0.459    Southeastern Oklahoma State

This display makes the .250 level appear more tenable. If we can accept that Mount Olive is really that good, it suggests that 11 or 12 total D-2 schools (rather than as few as three or four) would be better than average against the next level of competition. I don’t have any problem with that.

Next up, to complement the conference rating I presented for Division 1 two weeks ago, here’s a similar conference rating for Division 2. As with the D-1 ratings, there are two measures of conference strength. “Non-Conf” is based solely on non-conference games. “All Games,” you might guess, is based on all games played by all teams in the conference. I’ve also included the pessimistic and optimistic translations to Division 1.

Non-Conf  All Games  D1(200)   D1(250)   Conference
0.709     0.601      0.274     0.334     Sunshine State Conference
0.637     0.566      0.246     0.303     South Atlantic Conference
0.640     0.565      0.245     0.302     Peach Belt Conference
0.555     0.555      0.237     0.293     Carolinas-Virginia
0.656     0.536      0.224     0.278     Gulf South Conference
0.599     0.534      0.223     0.277     California Collegiate Athletic Association
0.409     0.530      0.220     0.273     Heartland Conference
0.472     0.520      0.213     0.265     Pacific West Conference
0.532     0.516      0.210     0.262     Lone Star Conference
0.558     0.502      0.201     0.252     North Central Intercollegiate Athletic Conference
0.501     0.496      0.197     0.247     Pennsylvania State Athletic Conference
0.571     0.495      0.197     0.246     Rocky Mountain Athletic Conference
0.536     0.494      0.196     0.245     Great Lakes Intercollegiate Athletic Conference
0.554     0.493      0.195     0.245     Conference Carolinas
0.544     0.490      0.194     0.242     Great Northwest Athletic Conference
0.563     0.489      0.193     0.242     Great Lakes Valley Conference
0.502     0.483      0.189     0.237     Northern Sun Intercollegiate Conference
0.499     0.470      0.181     0.228     Northeast-10 Conference
0.496     0.465      0.179     0.225     Mid-America Intercollegiate Athletics Association
0.381     0.440      0.164     0.208     Central Athletic Collegiate Conference
0.322     0.427      0.157     0.199     West Virginia Intercollegiate Athletic Conference
0.304     0.391      0.138     0.176     East Coast Conference
0.240     0.380      0.133     0.170     Central Intercollegiate Athletic Association
0.132     0.367      0.127     0.162     Southern Intercollegiate Athletic Conference
0.301     0.346      0.117     0.150     Central Atlantic Collegiate Conference
0.372     0.329      0.109     0.140     Independents

Since the top teams are not bunched in two or three conferences, no entire conference seems comparable to any but the worst leagues in Division 1. For instance, recall that the worst two conferences, the MEAC and the SWAC, are of .333- and .317-quality.

Interestingly, the Sunshine State Conference placed only one team, Tampa, on the leaderboard above. It stands atop the rankings because eight of its nine members are better than .550 relative to Division 2. Of the eight teams in Conference Carolinas (which includes Mount Olive), only three are better than .430.

Finishing up

This is only one way to compare divisions. Translations could be attempted at the level of player statistics by looking at how D-2 players fare in the pros or in collegiate summer leagues. In either case, however, we’d be limited both by confounding variables (the adjustment to pro ball, the switch from metal to wood bats) and by small sample sizes, since D-1 players far outnumber D-2 athletes in both contexts.

Meanwhile, I’m looking forward to compiling the same data from the 2009 season to see if additional results confirm or complicate the conclusions I’ve reached here. This is hardly a rallying cry for teams to send their best scouts to the small schools of West Virginia, but it should provide some general guidance when evaluating standout players from the top Division 2 programs.
