Tuesday, May 02, 2006

"Statistical Analysis" v. "Sabermetrics"

In the last week, there was a lively discussion on the mailing list of the SABR Statistical Analysis Committee about a proposal to change the name of the committee to the Sabermetric Committee or something similar. Some of the debate has centered on how the term “sabermetrics” is viewed by other members of the organization, and on other internal organizational politics. I know little about SABR politics and care little about SABR politics, so I will ignore that element of it.

As to which is the better name for the committee, though, I believe 100% that it is “sabermetrics”. One objection has been that there is no readily agreed upon definition of sabermetrics, and that the definitions often employed, especially by those who are not themselves sabermetricians, are essentially equivalent to “statistical analysis.” This is true, although I don’t believe that a misunderstanding of what sabermetrics is by the uninformed should change the way the term is used by those who understand it.

Bill James had two explicitly stated definitions of sabermetrics that he published in the Abstracts. The first was “the mathematical and statistical analysis of baseball records”--which, of course, is essentially “statistical analysis” by another name. Oops. But Bill quickly realized that this was not adequate, so he replaced it with “the search for objective knowledge about baseball”.

That definition is too broad in my opinion, as there are many things that are objectively knowable that do not really fall under the purview of sabermetrics. A list of the owners of the Red Sox is objective knowledge about baseball, but it is not sabermetrics; under this broad definition, though, practically all baseball research would qualify.

Personally, I think that the best single definition of sabermetrics is the one presented by Craig Wright in his foreword to the 1985 Abstract: “the scientific research of the available evidence, to identify, study, and measure forces in professional baseball.” My only quibble here is that I believe the “professional” qualifier is unnecessary. I especially like the use of the word “measure”--James' term, if not his definition, already captured this with the suffix “-metrics”. It is measurement that sets a list of Red Sox owners, or Cal McLish’s full name, and other such things outside of the realm of sabermetrics and into the other categories of baseball research.

I also believe that the “objective knowledge” part of the James definition is best left by the wayside. Much of the statistical data that we work with is not purely objective. Take the distinction between a hit and an error, or between a wild pitch and a passed ball. These are subjectively determined, although there is an established framework by which one is supposed to make the distinction. By the same token, scouting reports could be incorporated in some way into evaluation, if their biases and error ranges are considered--just as we should consider them for our statistical methods. That is not to say that all information is equally valid or equally useful, but we should not throw it all out the window right off the bat.

Wright also included some clarifications about the properties of sabermetrics which are very instructive in this particular debate: “A sabermetrician is not a statistician. Sabermetricians do not study baseball statistics. Sabermetricians are actually involved in research, scientific study, and the object is baseball.” (emphasis in original)

Statistical analysis and sabermetrics are not synonymous. Much of the statistical analysis that is done can certainly be put under the umbrella of sabermetrics--but it is a subset of sabermetrics, not the definition of it. I have always been uncomfortable with the term “statistical analysis” for other reasons as well--it implies a rigor and an approach that are not inherent to sabermetrics.

Statistics is a science unto itself, with its own principles and its own approach to questions. If a statistician is presented with the problem of estimating team runs from component statistics, his first move is not going to be to sit down and try to build a theoretical model of how runs are scored. He will more likely look at correlations between the events, run regressions, and so on. I am not here to say that those approaches are bad, just that sabermetrics encompasses more than that. Statistical tools are not going to allow you to create a method like Base Runs; once you have created the model, though, they will help you validate it.
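To make that concrete, here is a rough sketch of the Base Runs structure in Python. The coefficients in the B factor below are one commonly published simple version, not the only one; treat this as an illustration of the model's form, not a definitive implementation:

    # Base Runs models scoring as: runs = A*B/(B+C) + D, where A is
    # baserunners (excluding HR), B is an advancement factor, C is outs,
    # and D is home runs, which always score the batter.
    def base_runs(h, bb, hr, tb, ab):
        a = h + bb - hr                                      # baserunners
        b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02  # advancement
        c = ab - h                                           # outs
        d = hr                                               # automatic runs
        return a * b / (b + c) + d

The term B/(B+C) is a structural claim--an estimate of the fraction of baserunners who eventually score--not a regression coefficient, and that is exactly the kind of thing a purely statistical approach will not hand you.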

At the risk of repeating myself, statistical analysis is certainly an important part of sabermetrics, but it is a poor choice of term to describe everything that sabermetricians do. And therefore, I don’t see why the committee should carry that name.

4 comments:

  1. I figured that description might be questioned by some, but when I look at the sabermetric methods that involve logical models--RC, BsR, the Pythagorean method (sketched at the end of this comment)--they were all created by "baseball thinkers" like Bill James or David Smyth, not by people with formal statistics backgrounds.

    So often when you see a formal statistician write a sabermetric piece, they jump in with a bunch of regressions and correlations and the like, and attempt to fit to a range of normal Major League data.

    Basically, what I am trying to convey comes down to this: You don't need a statistician to develop Base Runs for you. You need a statistician to come in after you've developed BsR, and test it and validate it.
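
    The Pythagorean method mentioned above is the same kind of animal--a simple structural form rather than a regression output. A minimal sketch, using James' original exponent of 2 (refined exponents exist, but the form is the point):

        # Pythagorean winning percentage: W% = RS^x / (RS^x + RA^x)
        def pythagorean_wpct(runs_scored, runs_allowed, exponent=2):
            rs = runs_scored ** exponent
            ra = runs_allowed ** exponent
            return rs / (rs + ra)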

  2. On page 15 of "The Diamond Appraised", Craig Wright quotes Bill James' definition of sabermetrics:

    Sabermetrics is the field of knowledge which is drawn from attempts to figure out whether or not those things people say [about baseball] are true.

    Seems like a good definition to me.

    Larry (larrypmac@sympatico.ca)

  3. I don't think there is a need to change the name from sabermetrics, but I thought I should help describe what a statistician does. Much of my work is in the development of suitable models for data. Many of these are probability models, so we can explicitly describe the extent of chance variation. For example, Jay Bennett and I illustrate (in Curve Ball) how one can obtain a good measure of hitting performance without understanding baseball.

    I would be careful about categorizing statisticians as people who do only one type of analysis. There are many tasks in a statistical analysis, including data collection, data analysis through graphs and summaries, model building, and prediction.

  4. Phil Birnbaum, discussing The Wages of Wins (which I myself have not read) in the newest BTN, gets at what I am talking about (although the book is written by economists, not statisticians...I realize that the lines between these various fields get fuzzy, depending on what one's specialty is):

    There is a common pattern the authors use throughout the book – run a regression, explain the findings, assume the problem is now solved, then dismiss conventional wisdom because it doesn’t use regression. On page 7, the authors express dismay at the “laugh test” – the tendency to dismiss analytical findings if they contradict conventional wisdom -- correctly pointing out that research trumps intuition. But the authors go too far the other way. With equal lack of justification, they unthinkingly reject any non-statistical opinion that contradicts their results. Which I think is why they’re off the mark so often: they fail to consider that their analysis may be incomplete, or may not completely capture what they’re trying to measure. They give opposing views no benefit of the doubt, and their own views get no doubt at all.

    This is the vibe I get from many (not all, not most) real statisticians who dabble in baseball research. And that's not to say that "regular" sabermetricians are always self-critical enough...just about everybody is probably guilty of that to some extent. But the regression part of Phil's comments really addresses what I was trying to say here.

