Some of our readers may know that I am a fan of taking a more detailed look at a player's true production on the court, not just how many points, rebounds, or assists he gets. I have been searching for a system that better explains how production on the court relates to team success in a mathematical (and therefore repeatable) manner. Previously I interviewed David Berri, one of the authors of the book Wages of Wins, who has also attempted to find such a formula to explain what creates wins on a basketball court.
In my searching I ran across a new way of looking at this question, albeit more in an individual sense than what Wages of Wins attempts. David Sparks, a self-proclaimed Arbitrarian, is trying to gather a consensus of opinions on what matters in a game in order to rank players from different teams objectively. I hope you enjoy this interview, and that you visit his website and the People's Statistics Project as well.
3SOB: David, can you tell us about yourself?
DS: First of all, thanks for the opportunity to do this interview--it's exciting to get to have a forum like this to discuss ideas and basketball.
Just before my freshman year of high school, I came across some of Bill James' work on baseball, and it showed me for the first time that the conventional statistics were not necessarily sufficient, or even all that accurate in measuring what they claim to measure. I was a bit of a sports fan at that time, mostly basketball and baseball, so I sat down with some box scores and a calculator and began making up stats like "Absolute Bases" (essentially total bases + walks + stolen bases), and a basketball stat not too dissimilar from a linear-weights system, where I added together weighted points, assists, and rebounds. At the time, being from Houston, I was interested in making sure that Hakeem Olajuwon and Jeff Bagwell were highly valued, but I was otherwise trying to be fairly objective.
In college, I finally learned how to run a regression, and so I threw baseball box score stats in as predictors of runs scored, and it worked almost perfectly, and I was hooked. Since then, I've leaned much more toward basketball analysis, partially because I enjoy basketball more, partially because basketball players all do important things on offense and defense, and partially because basketball seems less fully explored, statistically.
Now that I'm in graduate school, I've taken a couple of actual statistics courses, learned some of the software, and read Edward Tufte. One of the big shifts, for me, has been figuring out that when you do something like take an average, say points per game, you lose a lot of information--you no longer know how those points are distributed, or how consistent the player was, etc. Also, it's interesting to me to see things in more than one dimension: ranked lists are fine, but I prefer to compare and understand players in a multidimensional way, and that's something that using graphics allows one to do.
3SOB: What is an Arbitrarian anyway?
DS: An Arbitrarian is one to whom arbitrariness and subjectivity are unpleasant, even abhorrent (so an Arbitrarian strives to be anything but arbitrary). My whole life, I've tried to have a good reason for doing the things I do and thinking the things I think--I have always tried to operationalize more subjective concepts in order to look at them more objectively, and I'm very interested in measuring things. For example, you might measure how worthwhile an errand is by dividing the time spent in transit by the time spent at your destination; if this ratio is more than one, the trip maybe isn't really worth it. An Arbitrarian tries hard to think through how they will make a decision or analysis, then sticks to their guns and carries it through, even if the results aren't what they hoped for.
For example, I may still want everyone to think Hakeem Olajuwon is the best player ever, and I could easily design a statistic to say he's the best (one could design a statistic that says Darko Milicic is the best), but when I set out to quantify basketball value I design the metric with at least some sort of theoretical motivation, and whatever comes out when I hit Enter in the spreadsheet is the answer I stick with. A common misconception is that just using numbers makes the analysis thorough and accurate--this just isn't true. Using numbers makes you dangerous, because numbers can say anything you want them to (it's easy to lie with statistics), but they have a certain authority that just making the same claim without numbers doesn't have.
3SOB: Okay, do you really believe you can devise a statistic that says Darko is the best player in basketball? Surely statistics can't be massaged that far, can they?
DS: Well, Darko would be a little tricky, but I think I could do it: I'd first look at the things at which he is better than average (blocks, for example), and the things at which he is worse than average (say, three-point shooting--although he also takes almost no three-pointers). I'd then make a set of linear weightings that put a lot of value on blocks and offensive rebounds, while putting little weight, possibly even negative weight, on the things at which he is not good. The thing is, there are mathematical ways to arrive at good estimations of the value of each statistic, but basketball is so complicated that they don't apply very well. Folks have done it (I've tried it myself), but the results usually arrive in a cloud of dust, escorted by a lot of hand-waving. This is why we have to think critically about any statistical approach, because the numbers can be massaged essentially infinitely--but most such metrics won't stand up to a good, critical look. That said, I am a huge advocate of taking the statistical approach, and really like some of what's out there.
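To make that concrete, here is a minimal sketch (ours, not Sparks') of how the same box-score lines can produce opposite rankings depending entirely on the linear weights you choose. The stat lines and weights are invented for illustration.

```python
# Hedged sketch: two invented per-game stat lines, where "Player B" has a
# Darko-like profile (blocks and offensive rebounds, few points or assists).
players = {
    "Player A": {"pts": 27.0, "ast": 7.5, "reb": 6.0, "blk": 0.3, "oreb": 0.8, "tov": 3.5},
    "Player B": {"pts": 7.0,  "ast": 0.8, "reb": 6.2, "blk": 1.9, "oreb": 2.1, "tov": 1.2},
}

# A "reasonable" weighting versus one deliberately rigged to reward blocks
# and offensive boards while nearly ignoring scoring and playmaking.
reasonable = {"pts": 1.0, "ast": 0.7, "reb": 0.5, "blk": 0.7, "oreb": 0.3, "tov": -0.9}
rigged     = {"pts": 0.1, "ast": 0.0, "reb": 0.2, "blk": 5.0, "oreb": 4.0, "tov": -0.1}

def rate(stats, weights):
    """Linear-weights rating: the weighted sum of a player's box-score stats."""
    return sum(stats[k] * w for k, w in weights.items())

for label, w in [("reasonable", reasonable), ("rigged", rigged)]:
    ranking = sorted(players, key=lambda p: rate(players[p], w), reverse=True)
    print(label, "->", ranking)  # the "rigged" weights flip the order
```

The point is exactly the one Sparks makes: the arithmetic is trivial, so the weights, and the reasoning behind them, are where all the real argument lives.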
3SOB: You are running the People's Statistics Project. If you are trying to devise a system objectively, why are you asking for so many subjective responses to build it?
DS: I'm actually working on a separate individual project, where I am trying to come up with my own, quasi-legitimate system--if you haven't seen it, check out this post: http://arbitrarian.wordpress.com/2008/04/25/choosing-the-mvp-geometrically/ where I introduce it. I like the idea of multiplying a player's contribution by team success to estimate value. I'm happy with that, and convinced of its theoretical validity. The question arises in identifying the appropriate way to estimate value--that is, points*x + assists*y - turnovers*z, etc. The x, y, and z determine what type of play is rewarded, and there's a lot of discussion about that. Most of the criticism I've gotten for the Winshares project has been "You can't measure value [especially defense, which, as you may have heard, is half of the game] with statistics [especially box score stats]"; the second-largest complaint is that I've got my weightings wrong. At first, I didn't include any penalties for missed shots, for example, and I probably overweighted assists, and these sorts of decisions change the outcome. If I could determine a set of weights that were accurate, at least with respect to each other, Winshares would work to my satisfaction, so I'm trying to do that both on my own and by consulting everyone for their input.
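For readers who want to see the shape of that idea, here is a rough sketch of a Winshares-style calculation as we understand it from this description (our reading, not Sparks' published formula): a weighted box-score sum gives each player a share of his team's production, and that share is multiplied by team success. Every number and weight below is a placeholder.

```python
# Hedged sketch of "contribution x team success". The weights and season
# totals are invented; only the structure follows the description above.
weights = {"pts": 1.0, "ast": 0.7, "reb": 0.5, "stl": 1.0, "blk": 0.7,
           "tov": -1.0, "fga_missed": -0.7}

def contribution(stats):
    """Weighted sum of box-score stats (the points*x + assists*y - ... term)."""
    return sum(stats.get(k, 0) * w for k, w in weights.items())

def win_shares(player_stats, team_stats, team_wins):
    """Player's share of team contribution, scaled by the team's wins."""
    return contribution(player_stats) / contribution(team_stats) * team_wins

# Made-up season totals for a player and his team:
player = {"pts": 1800, "ast": 400, "reb": 600, "stl": 90, "blk": 60,
          "tov": 200, "fga_missed": 700}
team   = {"pts": 8200, "ast": 1900, "reb": 3400, "stl": 600, "blk": 400,
          "tov": 1100, "fga_missed": 3600}

print(round(win_shares(player, team, team_wins=50), 1))  # roughly 10.7 of 50 wins
```

Swapping in different x, y, and z values changes each player's share, which is why getting the weights right, at least relative to each other, matters so much.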
Based on the reaction to Winshares, I realized that there are about as many opinions as to the appropriate weightings as there are basketball fans, and that there may be some validity to those opinions. I count myself among the least expert basketball analysts; there are literally millions of people out there with opinions as valid as, and probably more accurate than, mine. Why not ask them? If you have a jar with some unknown number of pennies in it and ask 1,000 people to guess the number, you would expect the mean of their guesses to be relatively close to the actual count--I'm just doing the same with basketball value.
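The penny-jar intuition is easy to check with a quick simulation (invented numbers, just to illustrate the averaging effect he is relying on):

```python
# Hedged illustration: 1,000 noisy, unbiased guesses about a jar of pennies.
# Individually the guesses are far off, but their mean lands near the truth.
import random

true_count = 1374
guesses = [random.gauss(true_count, 400) for _ in range(1000)]
print(round(sum(guesses) / len(guesses)))  # typically within a few dozen of 1374
```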
We ask people "Do you approve or disapprove of how George W. Bush is handling his job as president?" and get a thousand completely subjective responses. However, from this, we construct an approval rating, which has some meaning, and is reasonably comparable across time. The other aspect of the People's Statistic project is that it's a study of the people who are themselves responding--we might be able to interpret their responses to see who they want to be the best, for example, and other things like that.
3SOB: If someone wanted to put their thoughts into your database how could they do it and is there a way to see how a purely subjective system would work?
DS: That's easy: go to http://peoplesstatistic.googlepages.com and click on "Take the Survey" on the left. Answer the short series of questions, and then return to the People's Statistic home page, and click on "View the Leaderboard" to see where all your favorite players rank according to one iteration of the consensus statistic you've helped create.
The way to see how the purely subjective system works is to look at the numbers people are putting in and try to summarize them somehow. The way it's currently set up, I'm generating a mean of the responses, normalized to the value of points. However, the data is there and available for anyone: one might take the modal response for each statistic, or the median, instead of the mean, and construct a scaling out of that... Then apply those weights to all sorts of different situations--if you're an expert on the 1986 Celtics, apply the formula to the players on that team and see if it sorts them according to your own subjective opinion of how they should be sorted. If the sorting comes out all wrong, go back to the People's Statistic page and retake the survey, entering what seem to be more accurate weightings. If everyone does this, over and over, we'll refine the weights until they maximize their usefulness, which is all we can ask, I think, of a statistic.
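As a concrete (and entirely hypothetical) sketch of that workflow: aggregate the survey responses with a mean (or median, or mode), normalize so a point is worth 1, then score a roster with the resulting weights. The survey values and player lines below are invented.

```python
# Hedged sketch of the aggregation described above. Each respondent supplies
# a weight per stat; we summarize, normalize to the value of a point, and
# then rate a roster with the consensus weights. All numbers are placeholders.
from statistics import mean, median

responses = [
    {"pts": 1.0, "ast": 0.8, "reb": 0.5, "tov": -1.0},
    {"pts": 2.0, "ast": 1.0, "reb": 1.2, "tov": -1.5},
    {"pts": 1.0, "ast": 0.5, "reb": 0.4, "tov": -0.8},
]

def consensus(responses, summarize=mean):
    """Summarize each stat's weight across respondents, normalized so pts = 1."""
    raw = {k: summarize(r[k] for r in responses) for k in responses[0]}
    return {k: v / raw["pts"] for k, v in raw.items()}

weights = consensus(responses)            # or consensus(responses, median)

roster = {  # invented per-game lines for two players on some favorite team
    "Player X": {"pts": 25.8, "ast": 6.8, "reb": 9.8, "tov": 3.0},
    "Player Y": {"pts": 18.9, "ast": 5.1, "reb": 8.1, "tov": 2.2},
}
ratings = {p: sum(s[k] * weights[k] for k in weights) for p, s in roster.items()}
print(sorted(ratings.items(), key=lambda kv: kv[1], reverse=True))
```

If the ordering that comes out disagrees with what you know about the team, that is the signal to go back and submit weights you think are more accurate--which is exactly the feedback loop Sparks describes.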
Thanks for the questions! Be sure to participate and tell everyone you know to participate, too!