Tournament Teams Get Better and Better

I was curious to see whether the passage of time has changed how the Tournament Committee makes its seeding decisions.  Does an 0.60 RPI have the same meaning in 2018 as it did at the beginning of the 21st century?  Do teams with an 0.60 in 2018 get about the same seeding as they would have received in 2000?

To address this question, I added a term measuring the number of years since 2000 to my standard model.  The effect is significant and uniform across all three types of conferences.  This chart presents the estimated relationship between RPI and seeding for major conference teams, setting the elapsed-time variable to its extreme values of one (2001) and 17 (2017).

The difference is substantial.  Since 2000, the Committee has reduced the seedings it grants by about 1.3 ranks.1 Another way to look at this is to ask how much greater a team’s RPI needed to be in 2017 to get the same seeding it would have gotten in 2001. In 2001, a six seed would have required an RPI of 0.602; in 2017, that floor had been raised to 0.613. In the RPI rankings for 2018, that small numerical difference represents the gap between Arizona (0.612; 18th in RPI) and Loyola-Chicago (0.6027; 26th).


1I calculate this effect by multiplying the estimated coefficient of 0.0787 ranks/year by 17 years. Letting this effect vary by type of conference added no explanatory power (p > .75). A test of whether the slope of the relationship between RPI and seeding varied over time, conducted by including the interaction of RPI and time, was similarly unproductive.  The gap in seedings did not change over time, producing the parallel lines in the graph above.
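The footnote’s arithmetic can be sketched directly.  The coefficient comes from the model described above; the function name is mine:

```python
# The footnote's arithmetic: the estimated trend (0.0787 seed ranks per year
# since 2000) accumulated over the 17 years of the sample.
SEED_RANKS_PER_YEAR = 0.0787  # estimated coefficient from the model

def seeding_shift(year):
    """Cumulative change in predicted seeding relative to 2000."""
    return SEED_RANKS_PER_YEAR * (year - 2000)

print(round(seeding_shift(2017), 2))  # → 1.34
```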

Bracketology 2018: My Top 32 Teams

Three teams, Virginia, Xavier and Villanova, have placed themselves quite far ahead of the pack this year.  At the moment, Duke would be favored to win the last remaining top seed, but North Carolina could overtake Duke were UNC to win the ACC championship this weekend.  Winning a major championship is worth about one full rank, which would move North Carolina into the fourth slot and relegate Duke to the second line.  A victory for Clemson in the ACC tournament would move it up to a two seed along with Kansas and North Carolina.

Kansas seems locked into the second line regardless of what happens in the other major conference tourneys.  Michigan’s #11 ranking already incorporates the bonus it received for winning the Big Ten championship last weekend, so it seems unlikely the Committee would move it further ahead of fellow Big Ten member Purdue, based on the latter’s overall season record.

The SEC Tournament could provide some additional drama as well.  Only one of Auburn and Tennessee can win the SEC championship.  That team will likely be placed on the second line, with the other seeded one rank behind.

Only three of these teams, Nevada, Gonzaga and Rhode Island, do not play in a major conference.  Gonzaga’s score includes its “customary” two-seed bonus; it might or might not receive that again this year.  The ACC, Big East, Big Ten, and SEC conferences have roughly equal numbers of teams on this list.  The Big 12 and Pac 12 have just one each.

Does the Tournament Committee Play Favorites?

RPI, Conference Memberships, and Major Championships Determine Seedings

Gonzaga, Cincinnati, and Connecticut have been especially favored.

Selection Sunday for the 2018 NCAA Men’s College Basketball Tournament is just days away.  In past years I’ve published reports on how the Tournament Committee gives an advantage to teams in major and mid-major conferences and grants a seeding bonus to major champions.  Recent conversations with a friend raised the issue of whether the Committee somehow plays favorites by seeding some conferences or teams above or below others after adjusting for objective ability.  This report takes a look at such possible favoritism by the Committee.

As before I am using my database of 1,088 tournament appearances by 241 different basketball programs covering the 2001 through 2017 Tournaments.  The results appear in this companion article.

Once again we see that the Tournament Committee looks more favorably on teams from the major and mid-major conferences.  A team with an 0.60 RPI would be seeded seventh if it plays in a major conference, eighth if it hails from a mid-major, but eleventh if it comes from any other conference.

The Tournament Committee likes to stress that it looks at a team’s whole record when making seeding decisions and does not weight the end-of-season conference tournaments all that highly.  That appears to be true for all the conferences but the six “majors,” the Atlantic Coast, Big 12, Big East, Big Ten, Southeastern and Pac 12 Conferences.  Champions from the major conferences have been seeded an average of one rank higher than other major conference teams with identical RPI scores.

Does the Committee Play Favorites?

Most theories of favoritism rest on the assumption that the NCAA and its partners in the television industry have a clear incentive to structure the Tournament to drive ratings.  That creates pressure to feature marquee teams like Duke or Kansas that will reliably draw a nationwide audience.  In the brutal, single-elimination format of the Tournament, the Committee has strong incentives to seed the most popular teams higher and improve their chances of survival. But is there any other evidence of favoritism when it comes to specific teams or conferences?

I searched for favoritism by comparing the actual seedings awarded a team with the seedings I predict based on RPI, conference membership, and major championships.  I began with the five teams that have appeared in every Tournament since 2000 — Duke, Gonzaga, Kansas, Michigan State and Wisconsin.  Are any of these schools’ impressive unbroken records the result of some bias over the years by the various Tournament Committees, or did these teams earn their way to the Tournament in the gym?

For all but one of those teams I find no evidence of bias.  The outlier is, perhaps not surprisingly, Gonzaga, the only mid-major in that group of five and the darling of college basketball fans for years.  By my reckoning, the Tournament Committee has seeded Gonzaga nearly two (1.9) ranks higher than my model predicts for a mid-major team with the same RPI scores.  That means Gonzaga should have averaged an eight seed rather than the 6.2 it was actually awarded over the years.

I expanded my search for team favoritism to all teams with at least twelve appearances, to ensure any measured effect was not just an artifact of small sample sizes.  By that criterion only Cincinnati, with twelve appearances over the seventeen years, joined Gonzaga as a favored team.  Cincinnati, like Gonzaga, received an average seeding of 6.2; by my estimates it, too, should have averaged close to eight.
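The favoritism check above can be sketched as a residual calculation: average the gap between the seeds the model predicts and the seeds actually awarded.  The (predicted, actual) pairs below are hypothetical, not drawn from the real database:

```python
# Favoritism as a residual: mean gap between model-predicted and actual seeds.
# The appearance data here are hypothetical illustrations, not the database.
from statistics import mean

def seed_residual(appearances):
    """Mean (predicted - actual) seed; positive means better seeds than predicted."""
    return mean(pred - actual for pred, actual in appearances)

gonzaga_like = [(8, 6), (8, 7), (9, 7), (7, 5)]  # hypothetical (predicted, actual)
print(seed_residual(gonzaga_like))  # → 1.75
```

A team with a mean residual near zero earned seeds in line with its RPI and conference; a persistently positive residual is the pattern described for Gonzaga and Cincinnati.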

When I look for favoritism by conference, I find only a slight advantage given to the famous Atlantic Coast Conference, and significantly lower seedings given to teams from three mid-majors, the American Athletic, Colonial, and Western Athletic Conferences.  WAC teams are discounted a full seed, and teams from the other two conferences suffer a disadvantage closer to 1.5 seed ranks.

I included a test for Connecticut (with only eleven appearances it was not included above) and find that it, too, was awarded a bonus of about one full seed rank.  Unlike Gonzaga and Cincinnati, though, the Tournament Committee’s confidence in UConn has been demonstrated on the floor.  UConn has won the Tournament three times in this period and averaged 2.91 wins per Tournament appearance, behind only North Carolina at 3.07.  Since seeding so strongly determines a team’s overall performance, we might wonder whether Connecticut’s impressive record was partly the result of its favorable seedings.  The other two advantaged teams, Gonzaga and Cincinnati, have not fared especially well on the floor.  Gonzaga has averaged only 1.41 wins per appearance, and Cincinnati just 0.83.


Do Conference Championships Matter?

All across the nation college basketball teams are participating in conference tournaments.  For the smaller programs, winning the conference tournament is nearly the only way to take part in the “Big Dance,” the NCAA Men’s Tournament. Most of these conferences receive just a single bid to the tourney, given to the winner of the conference’s tournament.  The mid-major and major conferences often send multiple teams to the Tournament.  The conference tournament winner receives one bid, with one or more others selected “at-large” based on their performance over the course of the season.  Last year, for instance, two teams from the mid-major Missouri Valley Conference went to the Tournament – Northern Iowa, the tournament champion, and Wichita State, which received an at-large bid.

“Bracketologists” have debated whether conference tournament victories matter in determining seedings, or whether the Tournament Committee ignores the conference tournament results in favor of each team’s “complete resume” including the regular season.  For instance, the Committee might give little weight to a conference tournament victory by a top team like Kansas which will already be getting a high seed.  Yet conference tournaments, even among the majors, are often not won by the top teams.

The average RPI for major conference champions since 2000 is just 0.63, only a bit better than the average of 0.61 for all major conference Tournament teams. While high RPI major teams get correspondingly high seedings in the Tournament, what about those more middling teams?  Does winning a conference championship improve their seedings in the Big Dance?

To study this, I have updated my models that predict seedings based on RPI and conference membership.  I have included the data for the 2016 Tournament, again excluding the “play-in” teams ranked 65th through 68th.  I use a team’s RPI, its conference membership, and “interaction” terms that allow the effects of RPI to differ across the conferences.  To those predictors I add whether the team won a conference championship, separated out by type of conference.
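The structure of that model can be sketched with ordinary least squares.  The data below are synthetic stand-ins for the real appearance database, and the generating coefficients are illustrative, not the article’s estimates:

```python
# A sketch of the seeding model: RPI, conference-type dummies, RPI-by-conference
# interactions, and championship dummies split by conference type. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
rpi = rng.uniform(0.50, 0.68, n)                 # simulated RPI scores
is_major = rng.integers(0, 2, n)                 # major-conference dummy
is_mid = (1 - is_major) * rng.integers(0, 2, n)  # mid-major dummy (non-majors only)
champ = rng.integers(0, 2, n)                    # conference-champion dummy

# Design matrix: intercept, RPI, conference dummies, RPI interactions,
# and championship effects by conference type.
X = np.column_stack([
    np.ones(n), rpi, is_major, is_mid,
    rpi * is_major, rpi * is_mid,
    champ * is_major, champ * is_mid,
])
# Synthetic seedings: better (lower) seeds for higher RPI, majors, and champions.
seed = 60 - 85 * rpi - 1.0 * is_major - 1.0 * champ * is_major + rng.normal(0, 1, n)

beta, *_ = np.linalg.lstsq(X, seed, rcond=None)
print(beta.round(1))  # the RPI coefficient (beta[1]) comes out strongly negative
```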

The basic results appear fairly similar to earlier models.  Both mid-major and major conference teams are rewarded with better seedings than the remaining teams from smaller conferences with identical RPIs.

The blue line represents teams in conferences that are considered neither mid-majors nor majors.  The line displays the predicted seedings for the RPI values observed for these teams since 2000.  A couple of them have RPIs below 0.5 and are not represented in the graph, while the highest RPI any of these teams earned was 0.62, the point where the blue line ends.

The major and mid-major teams generally get much better seedings at identical RPI levels once we get above 0.56 or so.  Major conference teams also have an edge over the mid-majors that widens as RPI grows.  These results parallel ones I’ve reported on in earlier postings about seeding decisions.

If we add in so-called “dummy” variables for the champions, divided similarly among the three types of conferences, we get this rather startling result:

Winners of major conference championships have an average RPI score of 0.63, while mid-major winners average 0.58.  Without taking their championship victories into account, these teams are predicted to receive seedings of 3.5 and 10.1 respectively.  However, if we add the estimated championship bonuses, those seedings improve to a top seed for major champions and an eight or nine seed for mid-major champions.
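The bonus arithmetic works out roughly as follows.  The baseline predictions (3.5 and 10.1) come from the text; the bonus sizes below are back-inferred from the quoted post-bonus seedings, not exact model coefficients:

```python
# Applying approximate championship bonuses to the baseline predicted seedings.
# Baselines are from the text; bonus sizes are inferred, not model estimates.
baseline = {"major_champ": 3.5, "mid_major_champ": 10.1}
bonus = {"major_champ": 2.5, "mid_major_champ": 1.6}  # approximate, inferred

improved = {k: round(baseline[k] - bonus[k], 1) for k in baseline}
print(improved)  # → {'major_champ': 1.0, 'mid_major_champ': 8.5}
```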

Basketball pundits generally do not give much weight to conference championships, but the NCAA Tournament Committee apparently does.


Shot Clock Effects Redux

Last year I posted two items concerning the effects of the change to a thirty-second shot clock in NCAA men’s college basketball.  I found that total scoring had increased by nearly twelve points per game between the 2014-2015 season and the 2015-2016 season, after the shot-clock rule was changed.  However, the margin of victory was unaffected.  An equally dramatic effect appeared in three-point shooting.  Teams were hoisting nearly two more three-point shots per game, probably because the shorter clock meant more “desperation” threes were being taken.  However, I found no change in the accuracy of three-point shooting after the clock was shortened.

Scoring in the current 2016-2017 season differs hardly at all from last season.  All three measures show insignificant gains compared to last year.

This table extends the results for three-point shooting to include all games played through January 20th of this year.

Three-point attempts have continued to rise in the 2016-2017 season, but we also see an improvement in three-point accuracy.  Teams are shooting three more three-pointers every four games than they did last season, and their accuracy has improved by about half a percentage point.

This change might represent improvements in players’ abilities over time, or a conscious decision by coaches to recruit better three-point shooters out of high schools.  However it may also simply be random fluctuation.  If we go back to the data for 2008-2009, the earliest year available at the NCAA’s site, accuracy was 34.7 percent, hardly different from this year’s figure.  Attempts in 2008-2009 were still significantly lower at 18.9 per game.

Home Field Advantage in NFL Playoffs since 2010

Updated: December 18, 2017

Last year I undertook an analysis of home field advantage in the NFL playoffs but only the wild-card games had been played when I published those results.  I’ve now included all the playoffs in the 2016-2017 NFL season and added some other findings.  The basic conclusions I reached a year ago have remained unchanged.

Overall the home team has won about two-thirds of these games by an average margin of just under six points.

Because the teams are seeded in the playoffs, we should expect home teams to outperform their opponents.  The differences across the types of playoff games show the value of these higher seedings.  The margin is smallest in the “wild-card” games, since the teams in those games are more closely matched.  (The top two teams in each Conference receive a first-round bye, so the wild-card games pair the three seed versus the six, and the four versus the five.)  In the later rounds, when the top seeds play, the home team’s advantage is larger, running about seven to eight points compared to three in the wild-card games.

Some of the home field bonus can be attributed to the fact that higher-seeded home teams are stronger, while some may reflect the “home-field advantage.”  For comparison, I calculated the average score for home and away teams for the 41 games played during weeks seven through nine of the 2017 season.  Home teams scored an average of 22.8 points, 2.9 more than their opponents, and won 61 percent of their games.  These figures are quite consistent with the results for wild-card teams presented above.  Seedings play a greater role in the later playoff rounds.

Handicapping the NFL Divisional Playoffs: 2017

Two years back I estimated the point spreads in the four divisional NFL playoff games using a simple model of each team’s average margin of victory over the season based on these factors:

  • net yardage from offensive and defensive plays;
  • net sacks per game; and,
  • net turnovers per game.

I have since added the effects of three other factors to my model for point spreads:

  • net yards gained or lost during kickoff returns;
  • net yards gained or lost during punt returns; and,
  • net yards gained or lost due to penalties.

I have estimated the effects of these factors on each team’s average margin of victory using seasonal data for all 32 NFL teams between 2013 and 2016.

This table presents the results for the remaining eight teams in the playoffs.  The top half of the table shows the net difference between the home and away teams on each of our six factors.  For instance, Houston gained an average of 13.4 more yards per game compared to its opponents; for New England, the comparable figure is just 0.2.  That gives New England a net deficit in yardage of 0.2 – 13.4, or -13.2 as reported below.  The other figures in the top half of the table are similarly calculated.

Using my model, I can estimate the individual effects on the margin of victory (“point spread”) for each of these six factors. For instance, the effect for yardage is approximately 0.08 points per net yard gained, so the Texans’ 13.2-yard advantage over the Patriots is worth about 0.08 × 13.2, or 1.06 points, rounded to 1.0 in the table.  The team with the greatest advantage in terms of yardage is Dallas, which gained on net almost 28 yards more per game against its opponents than did Green Bay.  That difference is worth about 2.2 more points for the Cowboys in tomorrow’s game against the Packers.
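The spread calculation is a weighted sum of the six differentials.  Only the yardage coefficient (0.08 points per net yard) appears in the text, so the remaining weights in this sketch are placeholders for illustration:

```python
# Combining the six per-game differentials into a predicted point spread.
# Only the yardage weight comes from the article; the rest are hypothetical.
COEFS = {
    "net_yards": 0.08,          # from the article
    "net_sacks": 1.0,           # hypothetical
    "net_turnovers": 2.0,       # hypothetical
    "kick_return_yards": 0.05,  # hypothetical
    "punt_return_yards": 0.05,  # hypothetical
    "penalty_yards": 0.03,      # hypothetical
}

def predicted_spread(diffs):
    """Predicted home-minus-away margin from per-game factor differentials."""
    return sum(COEFS[k] * v for k, v in diffs.items())

# The Texans' 13.2-yard edge alone is worth about a point:
print(round(COEFS["net_yards"] * 13.2, 2))  # → 1.06
```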

When we add up the various effects of each of these six factors, Dallas has the greatest predicted advantage at slightly over three points compared to the Packers.  Next comes New England, whose advantage stems largely from creating more sacks and turnovers than do the Texans.  The Atlanta Falcons hold a slight advantage over the Seattle Seahawks, while the Kansas City Chiefs are predicted to lose to the Pittsburgh Steelers in their game Sunday night.

The last column on the right of the table shows the betting lines in Las Vegas for each game.  These are comparable to my predicted point spreads.  The 16-point spread for the Patriots over the Texans is outrageously high both by historical standards and by my estimates.  Atlanta is also more favored by bettors than the teams’ 2016 performances would justify.  And, despite my model’s prediction that Kansas City should lose to the Steelers in Arrowhead, bettors prefer the home team by a slight margin.

Bracketology 2016: Predicted Seedings

Last year I published a simple model of NCAA Men’s Tournament seedings based on RPI and conference membership.  To recap, higher RPI teams received better seedings, and teams representing major and “mid-major” conferences got better seeds than teams from the other conferences even if they had identical RPI scores.  In principle we should see no differences between conferences once RPI is taken into account, because the measure relies heavily on a team’s strength of schedule.  Teams in stronger conferences should have higher RPI scores because they face a more difficult schedule.

In practice, though, the NCAA Selection Committee clearly prefers teams from major and mid-major conferences and fails to give teams from other conferences a fair shake when it comes to seedings as this chart shows:


A team with an RPI of 0.600 from a “single-bid” conference like the Colonial or the Ivy League is predicted to be seeded tenth or eleventh, while schools with identical RPI figures from the mid-major and major conferences would receive a six or a seven seed.  The advantage for both those conferences over single-bid schools grows as RPI increases, as does the advantage for major-conference teams over mid-majors.

We can use the model I estimated that underpins this chart to predict how teams will be seeded in 2016 based on their current RPI scores.  Using RPI figures from CBS Sports through Sunday, February 21st, gives us the following predictions for the 36 teams that will make up the at-large field in this year’s Tournament.  Both Louisville and SMU are ineligible for Tournament play in 2016, so Seton Hall and Wisconsin have a chance to slip in at the bottom of the rankings.

Effects of the Shot-Clock Change on Three-Point Shooting

The newly accelerated speed of play in NCAA Men’s College Basketball may have had some side effects beyond a simple increase in tempo and higher scores.  The faster pace may make teams change the way they play the game itself.  One place we might see such a change is in three-point shooting.  Teams often resort to hoisting a “desperation three” if their half-court offense has bogged down and the horn on the shot clock is about to sound.

I’ve compiled the statistics for three-point attempts and three-point shooting percentage for the complete 2013-2014 and 2014-2015 seasons from the NCAA’s archive. This season’s figures represent those same data through games of January 25, 2016.  Including 2013-2014 enables us to compare any change this season to “normal” seasonal change before the shot clock was shortened. Here are the results for three-point attempts:


With the shorter clock, teams have been averaging a smidgen over twenty three-point attempts per game this season, about one and a half more than in 2014-2015.  Three-point attempts grew between 2014 and 2015 as well, but the rise in 2016 is some 3.5 times greater than that earlier increase.  Even if we deduct the 0.42 growth in attempts between 2014 and 2015 from this year’s rise, that still leaves more than one additional three-point attempt per game since the clock was shortened.  “Desperation” three-point shots probably account for a lot of this growth.


All these extra three-point shots have not affected accuracy. Teams shot 34.3 percent from outside the arc in 2014-15 and are shooting a statistically identical 34.6 percent now.  More striking is the sharp decline from the rate of 36.1 percent in 2013-2014.  While three-point accuracy rebounded slightly this season, it still remains statistically below 2013-2014.





Effects of the Shot-Clock Change in Men’s College Basketball

Most basketball teams play with a “shot clock” that limits the amount of time that either team can spend holding the ball.  In professional men’s basketball the clock runs for 24 seconds.  Both professional and collegiate women use a 30-second clock.

Until this season collegiate men had the luxury of a 35-second clock, considerably longer than that used in the professional ranks to which many of these players aspire.  Now the men have joined their female peers and play on a 30-second clock.  Has the faster pace of play affected scoring and, if so, how?


These figures come from games played during two equivalent weekends in 2016 and 2015.  Most teams were playing conference opponents so the level of competition is roughly the same. This year’s large snowfall in the Mid-Atlantic states produced a few cancellations so the number of games is slightly smaller for 2016.

Scoring has increased nearly twelve points per game this year.  Winners score about 6.5 points more per game in 2016, while losers score an additional 5.0 points. All of these differences are well beyond standard criteria for “statistical significance.”

The margin of victory also grew by 1.5 points, but that difference doesn’t pass statistical muster. There is no statistical evidence that the faster clock has increased the margin of victory.

Reducing the clock from 35 to 30 seconds constitutes a 14 percent reduction (5/35) in time of possession.  Scoring, on the other hand, has increased by only 8.6 percent in response (11.5/133.8).
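The two percentages can be checked directly from the figures quoted:

```python
# Checking the percentages cited above.
clock_cut = 5 / 35            # shot clock shortened from 35 to 30 seconds
scoring_rise = 11.5 / 133.8   # scoring increase over the prior baseline
print(round(clock_cut * 100, 1), round(scoring_rise * 100, 1))  # → 14.3 8.6
```

Scoring thus rose by noticeably less than the proportional loss of possession time.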

The shorter shot clock has increased the pace of play as well.  Using the enormous archive of collegiate basketball statistics available to subscribers at Ken Pomeroy’s site, I averaged his measures of “tempo” and “efficiency” for the 351 Division I teams in his database.  These figures are based on his estimates of the number of possessions per game, using a formula explained here.  I compared the entire-season figures for 2015 with those for games played through Sunday, January 24th of 2016.
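Pomeroy’s exact formula is linked in the text; a standard possessions estimate of the same kind looks like the sketch below.  The 0.475 free-throw weight is a commonly used value and the box-score line is hypothetical, both assumptions of mine rather than figures from this post:

```python
# A standard possessions estimate: field-goal attempts minus offensive
# rebounds, plus turnovers, plus a weighted share of free-throw attempts
# (those that end a possession). Weight and box-score line are assumptions.
def possessions(fga, oreb, tov, fta, ft_weight=0.475):
    """Estimated possessions for one team in one game: FGA - ORB + TO + w*FTA."""
    return fga - oreb + tov + ft_weight * fta

print(possessions(fga=58, oreb=11, tov=13, fta=20))  # roughly 69.5
```

Tempo is then this per-game possession count normalized to a forty-minute game, and efficiency is points scored per hundred possessions.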


“Tempo,” the number of possessions per forty-minute game, has increased drastically since 2015, rising well over four per game.  That alone might account for the increase in scoring, but it is not the only factor.  Teams are also scoring about one point more per hundred possessions this year than last.  So not only do teams have more possessions with a shorter clock, the faster pace appears to make those possessions slightly more productive as well.

Obviously this change will wreak havoc on historical comparisons to the 35-second era.  Identically-skilled players in 2016 should be scoring on average about nine percent more compared to the men who played in years past.