Sunday, December 29, 2013

Battle of the Sexes: Free Throw Edition

As a UNC basketball fan, this season has definitely been a roller coaster ride.  UNC is 3-0 against top 25 teams, beating #1 Michigan State on the road, #3 Louisville on a neutral court, and #11 Kentucky at home.  UNC has also had some bad losses: UAB, Belmont, and Texas.  Two of the losses were by 3 points, and UNC missed over 20 free throws in both of those games.  Last weekend, I attended the Toledo-Dayton women's basketball game, and there were hardly any missed free throws.  So this got me thinking: are women any better than men at free throw shooting?

When looking at all Division I free throw percentages for both men and women (as of December 27), men are slightly better at making free throws, as shown below.  The median free throw percentages (thick black lines) are 69.1% for men and 68.4% for women.  The variability is also much smaller for the men than for the women.  Note that the UNC men make 61.3% of their free throws, ranking them 333rd out of 345 teams.
Next, I wanted to look at the free throw percentages of the top 25 ranked teams, which is shown below.  For easier comparison, I have also included the distribution for all Division I teams.
For both men and women, teams ranked in the top 25 are on average better at making free throws than Division I teams as a whole.  When restricted to top 25 teams, women have a better free throw percentage than men.
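If you want to reproduce this kind of comparison, here is a minimal sketch in Python.  It assumes the team percentages have been scraped into a hand-built CSV (ft_pct.csv) with columns team, gender, ft_pct, and top25 (a 0/1 flag); the file and column names are mine, not an official NCAA export.

    # Compare free throw percentage distributions by gender (all teams and top 25).
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("ft_pct.csv")

    # Median and quartiles for all Division I teams, split by gender
    print(df.groupby("gender")["ft_pct"].describe()[["25%", "50%", "75%"]])

    # Same comparison restricted to teams currently ranked in the top 25
    print(df[df["top25"] == 1].groupby("gender")["ft_pct"].median())

    # Side-by-side boxplots (thick line = median), mirroring the plots described above
    df.boxplot(column="ft_pct", by="gender")
    plt.show()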

So who is better at making free throws: men or women?  Men are slightly better on average than women, but when restricted to the best 25 teams, women are on average better.  Regardless, I'm hoping that UNC can increase their free throw percentage in the second half of this season.

Thursday, October 24, 2013

Coming soon: More NBA data than you can imagine

It was recently announced that the NBA is partnering with a company that will record and release data for all of the games (similar to all of the MLB data that is currently available, such as pitch placement).  You can read the full article here.

The article mentions: "For the first time, all 29 of the NBA's arenas will have software-packed cameras that will record players' every move, mapping 25 images per second."  This is a ton of data, so I am sure that they will not be releasing all of the raw image/mapping data, but some summarized version, such as where on the court each shot came from.  Here are a few statistics that I would love to see made available:

  • Number of times a ball is dribbled per game.
  • Number of times a player travels but no call is made.  I'm thinking about fast breaks where players take 4 steps without a dribble.  It would also be cool to see this breakdown by player/team.
  • The arc of each free throw shot.  For a given player (this will vary a lot between players), how consistent are their arcs, and how well can you predict whether a free throw is made or missed based on its arc?  (See the sketch after this list.)
  • The distance that each player runs during a game.  
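For the free throw arc idea, here's a rough sketch of what that prediction could look like, assuming the released data eventually includes per-shot arc features.  The file name, column names, and features (release angle, peak height, entry angle) are all hypothetical; this just sketches a logistic regression of made/missed on the arc.

    # Hypothetical sketch: predict whether a free throw is made from its arc.
    # Assumes a per-shot table (free_throws.csv) with made-up columns:
    # release_angle, peak_height, entry_angle, made (0/1).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    shots = pd.read_csv("free_throws.csv")
    X = shots[["release_angle", "peak_height", "entry_angle"]]
    y = shots["made"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))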
Leave a comment if you have any other ideas.

Friday, October 11, 2013

Getting a shoutout from Sports Illustrated

After writing my last post arguing that women should not blindly challenge calls more often in professional tennis, I sent a quick summary to Jon Wertheim at Sports Illustrated.  It sounds like he appreciated my side of the argument, as he published part of my response in his weekly article.  Pretty cool!

Sunday, September 15, 2013

Why women should not challenge more in tennis

In last week's Sports Illustrated, Jon Wertheim argues that women should use the challenge system more at Grand Slam tennis tournaments.  Although he includes data in his article to make his case, I'm going to argue that he is misinterpreting the data and show that convincing women to challenge more often might actually hurt the game.  Unfortunately, this article is not available online, so I will begin with a summary and include some direct quotes (*Update - the article is available here).

A little background: "In 2006 tennis instituted a replay challenge system not unlike the NFL's.  Provided the court is equipped with the technology, players can appeal line calls for review."  The rules are the same for men and women; however, men and women have different challenge behavior.  "Men challenge their points 25% more often than women - though their success rates are virtually the same."  For example, at Wimbledon this year, women challenged 2.6% of the points played, while men challenged 3.3% of the time. This trend holds over all Grand Slam tournaments over the past year.  Men won their challenges 27.73% of the time while women won 27.37% of challenges.

Supplied with this data, the author concludes that "men are more prone to question their authority" and "women are more reluctant to challenge and be assertive or confrontational".  Many assume that men are more likely to challenge because they hit the ball harder, so linespersons are more likely to make a mistake.  The author's response: "Sounds logical.  But if this were true - if it were harder for linespersons to trace 140-mph serves, as opposed to 120-mph serves - we would expect to see a disparity in accuracy of line calls".  But because the challenge accuracy is equivalent between genders, the author concludes that women challenge less (and thus accept incorrect calls) because "women are uncomfortable with confrontation and negotiation".  The article concludes with a quote from Martina Navratilova: "Women need to be more comfortable challenging.  Here's one area where there's no reason we shouldn't be like the men."

The author's whole argument hinges on one point: there is no disparity in the accuracy of line calls between men's and women's matches.  I agree that the author's conclusions would hold if the data could show this.  However, we have no way of knowing the accuracy of all linespersons' calls from the data - we only know that the accuracy of player challenges is roughly equivalent between genders (about 27%).  What we cannot know is how many of the unchallenged points contained an incorrect call by a linesperson.  The only way to figure this out would be for someone watching the match (presumably in the TV booth) to "challenge" every point to determine how many points contained incorrect calls, and then look at how many of those points were challenged by players.  This would allow us to construct the following table for each gender:

                             # Points containing          # Points containing
                             all correct line calls       an incorrect call
# Points Challenged                    A                            B
# Points With No Challenge             C                            D

From the data presented, we know the values for A and B.  With the Wimbledon 2013 men's data, A = 2.38% of all men's points (3.3% of points challenged x 72.27% of challenges unsuccessful) and B = 0.92% of all men's points (3.3% of points challenged x 27.73% of challenges overturned).  For women, A = 1.89% and B = 0.71% of all women's points.
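To make the arithmetic explicit, here is a quick sketch of that calculation in Python; the only inputs are the challenge rates and success rates reported above.

    # Cells A and B of the table above, as a fraction of all points played.
    # A = challenged points where the call stood; B = challenged points that were overturned.
    rates = {"men": (0.033, 0.2773), "women": (0.026, 0.2737)}  # (challenge rate, success rate)

    for gender, (challenge_rate, success_rate) in rates.items():
        A = challenge_rate * (1 - success_rate)
        B = challenge_rate * success_rate
        print(f"{gender}: A = {A:.2%} of points, B = {B:.2%} of points")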

To determine whether or not linespersons make more incorrect calls in men's matches, we need to know D (the total number of incorrect calls is B + D).  Without this knowledge, we cannot determine whether more challenges from women would result in more overturned calls.  However, we know that the value of B is larger for men than for women by 0.21% of all points played.  For the author's assertion that linespersons are equally likely to be incorrect for women as for men to hold, D would need to be larger for women than for men by 0.21% of all points played.

Let's look at 3 examples using the data from Wimbledon 2013 (a quick calculation sketch follows the list):

  1. Assume women got every single wrong call overturned (D = 0).  In that case, additional challenges would not get any more calls overturned.  If women challenged as frequently as men without any additional overturned calls, then women's success rate would drop to 21.56%.  So urging women to challenge more would make them appear to have worse judgement than men (not a good thing when trying to argue for gender equality).
  2. Assume the number of incorrect calls that go unchallenged is the same for both sexes (D is equal for men and women).  If this is the case, then you could argue that both men and women should be challenging more.  However, because B + D is still smaller for women than for men, their success rate would decrease if they challenged the same number of times as men.
  3. Assume the total number of points containing an incorrect call is equivalent between genders (D is larger for women than men by 0.21% of all points, making B + D equivalent between men and women).  If this is true, as the author assumes, then his conclusion that women need to challenge more is correct.  However, if women challenged the same number of times as men and the extra challenges caught all of the remaining incorrect calls, their success rate on those "new" challenges would be 29.08%, higher than their current 27.37%.  This means that women are currently challenging calls that are less likely to be overturned than the calls that they are missing (try explaining this to the women players!).
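Here is the arithmetic behind scenarios 1 and 3 (plain Python, same Wimbledon 2013 inputs as before; scenario 2 depends on the unknown D, so there is nothing new to compute).

    # Scenario arithmetic, using the Wimbledon 2013 figures.
    women_challenge, men_challenge = 0.026, 0.033   # fraction of points challenged
    women_success, men_success = 0.2737, 0.2773     # fraction of challenges overturned
    women_B = women_challenge * women_success       # overturned calls, women (~0.71% of points)
    men_B = men_challenge * men_success             # overturned calls, men (~0.92% of points)

    # Scenario 1: D = 0 for women, so challenging as often as men adds no new overturns.
    print(f"Scenario 1 success rate: {women_B / men_challenge:.2%}")

    # Scenario 3: B + D is equal across genders, so women still have (men_B - women_B)
    # worth of overturnable calls, spread over (men_challenge - women_challenge) new challenges.
    print(f"Scenario 3 success rate on the new challenges: "
          f"{(men_B - women_B) / (men_challenge - women_challenge):.2%}")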

Therefore, we cannot tell from this data whether linespersons miss the same number of calls for women as for men.  This also means that we cannot determine whether women could challenge more and still be as accurate as men in getting calls overturned.  So what conclusions can be made from this data?

  • Men and women are equally successful at challenging.  This means that one sex does not have better "eyes" than the other.
  • Men and women are equally bad at challenging.  On average, less than 1 out of 3 challenges will result in a call being overturned.  This number is likely due to "throw away" challenges when a player knows the call was correct but has challenges to waste at the end of a set (or needs a longer break to catch his/her breath after a long point).
  • Assuming that men and women are getting every incorrect call overturned (D=0), linespersons are 29.6% more likely to make a mistake on a men's point than a women's point (0.92% vs 0.71%).

Sunday, August 4, 2013

Analyzing Pro Athletes' Physiological Dashboard

I recently came across an article about the "sports science" changes that Chip Kelly has implemented since becoming head coach of the Philadelphia Eagles.  Basically, the Eagles spent more than $1 million on new technology that measures physiological details (heart rate, amount of time spent running during practice, 3-D views of how players are lifting weights, etc.) in the hopes of creating a "physiological dashboard" for each player.  They want to monitor the performance of each player during practice to increase training efficiency, such as ending practice early for players reaching their endurance limits or ensuring that players receive the correct amount of hydration based on what was lost during practice.  A large portion of the article is dedicated to describing the Eagles' sports-science coordinator, who has previously served as a strength coach and nutritionist for colleges and the Navy SEALs.

Here are some interesting quotes:

  • "The result is a data driven approach to training"
  • "Players can log into their personal computers to check their own fitness profiles"
  • "Last season Catapult helped on of its NFL clients compare practice data ... in weeks when the team won compared to those when it lost.  A trend emerged: during Thursday practices before losses, offensive skill players were running a lot but not very quickly."
OK, so NFL teams are beginning to collect all of this data about their players.  But who exactly is mining all of this data to find useful information?  I can't believe that it's the sports-science coordinator (he doesn't have a statistics degree).  Plus, who can actually monitor and interpret all of this data in real time (i.e., during practice)?  It seems that Catapult, an IT consulting company focused on interpreting data, is doing some work after the season is over, but do any of these teams have the capacity to perform analysis in-house?  Here are a few things to think about:
  1. I'm sure most of the companies selling the equipment have guidelines or suggestions for how to interpret the data.  So maybe a bell goes off when a player's heart rate gets too high.  But how accurate are these baselines, especially when the same guidelines are applied to 180lb running backs and 350lb linemen?
  2. What is the goal of collecting all of this data?  Making real-time decisions about players' health during practice?  Drawing team-wide conclusions about what does/doesn't work at the end of the season?  These are 2 very different questions that could influence the most effective way to collect data.
  3. How much are teams investing into analyzing this data (either in-house or through outside companies)?  For current genomic sequencing projects, more money is spent on the analysis than on the sequencing experiment itself.  So are the Eagles planning to spend an additional $1 million on interpreting all of this data?  Or will this data just go to waste?

Thursday, August 1, 2013

College Basketball Commitments

I came across a very nice article describing the college commitment habits of 700 top basketball recruits.  The author does a great job of delving through the data and concisely summarizing the main findings.  A few of my favorite highlights:
  1. First, just obtaining all of this data (all high schools and colleges attended for all 700 athletes) must have been a Herculean task.  
  2. I like that he also displays the data with several bar charts and a very colorful cumulative density plot showing how early in their high school careers recruits commit to a college.
  3. Of the players who spent at least 2 seasons playing in college, over a third didn't end up where they started.
  4. Think that these top recruits only start bouncing around once they reach college?  4 of the recruits attended 6 different high schools, including current NBA player Michael Beasley.  Plus, over 50% of the 2013 recruiting class attended at least 2 high schools.

Friday, March 15, 2013

Statistics Playing Major Role in College Football Playoffs


In the above article, Sports Illustrated sought the recommendations of 5 college football and basketball "stats gurus" to get a better feel for how the college football playoff committee should go about choosing the four teams to compete in the 2014 national championship playoffs.  They discussed three primary themes:


1. The need for accountability and transparency. Although the BCS releases their rankings and scoring/point totals every week, the actual formula used in these calculations is proprietary.  I am in agreement with the 5 experts in calling for full transparency in the system.  However, this makes it difficult to include an "eye test" in the decision (whether this should be included is another debate).  My favorite quote:
"I doubt this will happen, but I think they need to have a non-voting data person in the room as well. Someone to help the members interpret ratings and other data sources, answer questions that are posed and hold the group accountable to information that is shared."
2.  It's about more than wins and losses.  Should other factors, like injuries and margin of victory/defeat, be taken into account?
"Of course, the danger of using advanced stats or ignoring head-to-head results is the committee might wind up producing a bracket that the majority of the public -- accustomed to seeing rankings ordered largely by team records -- rejects."
3.  Strength of schedule isn't what it seems.
"There are many ways to measure schedule strength, and many of them are valid. I like to use this example. Imagine two schedules. Schedule A consists of the six best teams in the country and the six worst. Schedule B consists of the 12 most average teams in the country. Which is tougher? Ask Alabama, and they'll obviously say Schedule A. Alabama would have a much easier time running the table against Schedule B. But ask the worst team in the country which one is easier, and they'll say the opposite. The worst team in the country would have a hell of a time winning a single game against Schedule B. ... So depending on who you are, you can perceive the exact same schedule of teams very differently."

Saturday, February 2, 2013

Super Bowl Squares Strategy

With the Super Bowl just a day away, I am hearing a lot of talk about Super Bowl Squares, the game of chance that only gets played one day out of the year.  In most variations of the game, people sign up for squares, and once all squares have been taken, the numbers are randomly assigned to the rows and columns, making this purely a game of chance (I guess the football game plays a role too).

But suppose that these numbers were not randomly assigned: you get to choose the numbers that you want.  Which pair of numbers gives you the best chance of winning?  I have seen a few articles online trying to answer this question, but all the ones that I have come across look at the score after each quarter of all previous Super Bowl games.  While I see the point of only looking at Super Bowls, some of these games were played over 40 years ago and the game has clearly evolved since then.  For example, I have to believe that field goals are much more common now than they were 40 years ago, as kickers are now able to routinely make 50+ yard field goals (I don't have data to back this up, so let me know if I'm wrong).  Therefore, I have decided to look at all football games from this past season, including the playoffs.  If my counting is correct, this covers 266 games.  I should probably look at the score after each quarter of every game, but that would cover 1,064 quarters, and I just don't have the time (or, honestly, the desire) to do that.  So I have decided to only analyze the final scores of the 266 games.  I also ignored whether the winning team was home or away, so to me, Team A winning by a score of 17-13 (making square 7,3 the winner) is equivalent to Team A losing 13-17.  That is, I treated squares (7,3) and (3,7) as the same.
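If you want to redo this count, here is a minimal sketch of the tallying in Python.  It assumes the 266 final scores sit in a hand-built CSV (scores_2012.csv) with columns winner_pts and loser_pts; the file and column names are my own.

    # Tally last digits and unordered digit pairs from the final scores.
    import csv
    from collections import Counter

    digits, pairs = Counter(), Counter()
    with open("scores_2012.csv") as f:
        for row in csv.DictReader(f):
            a, b = int(row["winner_pts"]) % 10, int(row["loser_pts"]) % 10
            digits.update([a, b])
            pairs[tuple(sorted((a, b)))] += 1   # treat (7,3) and (3,7) as the same square

    games = sum(pairs.values())
    print("Most common last digits:", digits.most_common(4))
    for pair, count in pairs.most_common(10):
        print(pair, f"won {count} of {games} games ({count / games:.1%})")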

Let's first look at the most common point totals, with respect to the last digit.  As expected, the least likely point totals end in 5 (3.8% of all final scores), 2 (4.3%), and 9 (5.1%).  The most common point totals end in 3 (16.4%), 4 (16.0%), 7 (14.8%), and 0 (13.5%).

Now let's look at pairs of numbers.  If you played over the full 2012 season, 3 squares would have never won (when only looking at final scores): (1,2), (2,9) and (5,6).  This isn't too surprising because, as shown earlier, it is difficult to score total points ending in 2, 5 or 9.  The most likely pairs this past season were (3,6) and (3,7)*, each of which occurred 16 times.  Combined, these 2 pairs would have won over 12% of the games.  Additional pairs that would have won over 10 times this past season include (0,3), (0,4), (0,7), (0,8), (1,4) and (3,4).

In conclusion, if the numbers in Super Bowl Squares were not randomly assigned, choosing the right squares would give you a clear edge in the long run.

* SI writer Peter King picked the Ravens to beat the 49ers 27-23, so he's playing the odds with his final score prediction.

UPDATE (2/4/2013). The score after each quarter (with the Ravens always leading) was 7-3, 21-6, 28-23 and 34-31.  This means that the winning squares were (3,7), (1,6), (3,8) and (1,4).  Did anyone follow my advice and bet on (3,7) or (1,4)?

Sunday, January 27, 2013

MLB Hall of Fame Voting

Here's a link to a really cool interactive graphic of MLB hall of fame voting.  As with most datasets, there are many different variables that are useful to display visually.  Including interactive graphics is a nice way to show multiple variables (or to select only a subset of variables) without making a million different 2-d plots.  Below the graphics, the authors describe all of the features of this plot - I suggest that you read through it and try some of these things out.


The New York Times has begun to show similar visualizations for economic and political issues.  As we move away from print articles and towards online reading, I think we will see a rise in popularity of these types of interactive graphics.  

I'd love to learn how to make these types of graphics, but most of the heavy lifting is done in Java (which I have no experience with), and I don't have enough free time :(

Monday, January 21, 2013

College Football 2012 Wrap-up

This BCS bowl season, two teams, Wisconsin and Northern Illinois, were coached by interim coaches in their BCS bowl game after their head coach left the team to accept a new position.  There seems to be an increasing trend of coaches leaving their teams before a bowl game to accept a new coaching position.  I began wondering whether schools that are searching for a new head coach should try to scoop up other coaches before the bowl games are completed, or whether they should wait and factor in bowl game performance (maybe make the candidates feel some extra pressure to win).  Schools tend to want to fill their coaching vacancies ASAP because this gives the new coach an extra month to put together his coaching staff and recruit.  But does this process of hiring coaches before the bowl game actually lead to better football success?

I chose to look at head coaches who led their team to one of the BCS bowl games and then accepted a new college coaching position the following year.  While this leaves a small sample size (n=9), it is easier to evaluate the performance of these coaches because it is safe to assume that the new school expects the new coach to lead his new team to BCS bowls.  Here is a summary of the 9 coaches:

Coach           | Previous Team | Old BCS Record* | Year | Last Bowl | New Team       | Record  | New BCS Record
Steve Spurrier  | Florida       | 2-1             | 2001 | W         | South Carolina | 66-37   | 0-0
Urban Meyer     | Utah          | 1-0             | 2004 | W         | Florida        | 65-15** | 3-0
Walt Harris     | Pitt          | 0-1             | 2004 | L         | Stanford       | 6-17**  | 0-0
Rich Rodriguez  | West Virginia | 1-0             | 2007 | W*        | Michigan       | 15-22** | 0-0
June Jones      | Hawaii        | 0-1             | 2007 | L         | SMU            | 31-34   | 0-0
Brian Kelly     | Cincinnati    | 0-1             | 2009 | L*        | Notre Dame     | 28-11   | 0-1
Randy Edsall    | UConn         | 0-1             | 2010 | L         | Maryland       | 6-18    | 0-0
Bret Bielema    | Wisconsin     | 0-2             | 2012 | L*        | Arkansas       | -       | -
Dave Doeren     | Northern Ill. | 0-0             | 2012 | L*        | NC State       | -       | -

* = Coach left the team before the BCS bowl game, so the team was coached by an interim coach in that game.  If the head coach left the school before the BCS bowl game, that game is not reflected in his BCS record.
** = No longer with this team.  Meyer retired, and Harris and Rodriguez were fired.

A few interesting gems from looking at this table (a small tallying sketch follows the list):
  • Only 3 of these 9 coaches won a BCS bowl game with their previous team (Spurrier, Meyer, Rodriguez).  
  • Only 2 have taken their new teams to a BCS bowl game (Meyer, Kelly), with Meyer being the only coach to win a game (actually 3, including 2 national championships).
  • The only coach to win a BCS bowl game with their new team (Meyer) had won a BCS bowl game with his previous team.
  • 3 of the 4 teams coached by interim coaches lost their bowl game, with West Virginia being the only exception.
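If you want to poke at the numbers yourself, here is a small sketch that encodes the relevant columns of the table (transcribed by hand) and re-derives the first three bullets.

    # The table above, transcribed by hand: (coach, old BCS record, new team's BCS record).
    coaches = [
        ("Steve Spurrier", "2-1", "0-0"), ("Urban Meyer", "1-0", "3-0"),
        ("Walt Harris", "0-1", "0-0"), ("Rich Rodriguez", "1-0", "0-0"),
        ("June Jones", "0-1", "0-0"), ("Brian Kelly", "0-1", "0-1"),
        ("Randy Edsall", "0-1", "0-0"), ("Bret Bielema", "0-2", "-"),
        ("Dave Doeren", "0-0", "-"),
    ]

    def wins(record):
        return 0 if record == "-" else int(record.split("-")[0])

    won_before = [c for c, old, new in coaches if wins(old) > 0]
    reached_after = [c for c, old, new in coaches if new not in ("-", "0-0")]
    won_after = [c for c, old, new in coaches if wins(new) > 0]

    print("Won a BCS bowl with previous team:", won_before)
    print("Reached a BCS bowl with new team: ", reached_after)
    print("Won a BCS bowl with new team:     ", won_after)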
Yes, programs that are hiring coaches have probably suffered some losing seasons and need time to rebuild, so these results could change in another year or two.  Plus, this is a small sample size, so we would probably be better off including all coaches who leave their teams, not just ones leaving after reaching a BCS bowl game.  In my opinion, schools that are hiring college football coaches are placing too much emphasis on reaching BCS bowl games and not enough on winning these games.  Even if it is all about the money of BCS bowl games and not actually about winning, most of these big-name hires are struggling to take their new teams to a BCS bowl game.

If I were in charge of hiring a new football coach to turn around a struggling program and win national championships, here would be my one major piece of advice:
If you are serious about winning national championships, hire a coach who has actually won a BCS bowl game.  If none of these coaches are available/interested, then don't settle for a coach who has taken his team to a BCS bowl game but lost - what makes you think he can do better next time (ahem, Brian Kelly)?  Save your money and take a chance by hiring a coach who hasn't been to a BCS bowl (but has preferably won other bowl games).  You might just hire the next Les Miles (2-1 in BCS bowl games since 2005, including a national title).

Thursday, January 3, 2013

Scheduleball


By now, everyone is aware of the impact of Moneyball (statistics!) on the MLB.  Due to the media firestorm surrounding Moneyball (aided by the catchy name), the perception seems to be that baseball GMs are the brainiest employees in professional sports.  Occasionally, I'll hear a story about an NFL or NBA GM who breaks the mold by applying analytics to improve their team's performance, but this isn't very common (although I imagine all pro teams are now employing at least a few data analysts).  

But I've NEVER heard a story about a college athletic director credited with improving/influencing his school's on-field performance beyond the hiring of a high-profile coach ... until I read the above story.  I imagine that the main reason for this is that it is extremely rare.  You only hear from the AD when it's time to hire/fire a coach, respond to NCAA investigations, or build a new state-of-the-art athletic facility.  This really doesn't make much sense, especially considering that all of the top universities have many great PhD-level statisticians on their payrolls.  While not all statisticians do sports-related research, all a university needs to do is buy out one of a professor's courses to get his/her expertise on how to apply a Moneyball-style approach and give their athletic teams as many advantages as possible ("buying out a course" = allowing a professor to dedicate the time he/she would normally spend teaching a course to some other activity).

I realize that this may ruffle some feathers, as many head coaches don't want to take advice from some academic wizard.  One option is to hire an AD who can do these sorts of analyses himself and only try to change "off the field" decisions, such as scheduling, which is what the linked story above explains.  And since probably all coaches have bonuses built into their contracts for post-season appearances, what coach wouldn't want to do all he can to increase the likelihood that his team makes the playoffs (or a bowl game)?

If any university wants to take a chance and hire a statistician as their next AD to get an advantage over their competition, I'll be more than happy to interview :)

Wednesday, January 2, 2013

NFL Pop Quiz

As a new resident of St. Louis, I've enjoyed having a local NFL team to cheer for (although maybe not for much longer if they move to LA).  Rookie punter Greg Zuerlein had some incredible special plays this year and completed 3 of 3 pass attempts for 42 yards and 1 touchdown.  Can you guess which high-profile (and highly paid) quarterback threw for fewer yards?  Find out here.