Monday, March 10, 2008

Other Ventures

This post will be little more than an advertisement for some of the other baseball things I have been working on lately. First, I have a new blog, Weekly Scoresheet. This new blog will in no way supersede this one, although it may cause a slight reduction in the number of posts here. It is solely devoted to scorekeeping, and particularly to displaying scoresheets that I have kept over the years (and, eventually, if anyone is interested, some scoresheets from readers of the blog). I intend to post one scoresheet per week, and occasionally there may be other posts that just deal with my thoughts about keeping score.

It is in many ways a self-indulgent site, since I just post my own scoresheets. Why would anyone want to look at those? Fair question, and I don’t expect to have a large readership. Personally, though, I enjoy looking at other people’s scoresheets and think there is a dearth of information on scorekeeping on the web. If someone else had a similar site, I would check it out from time to time. So whether anyone reads it or not, I will enjoy writing it.

Secondly, about a month ago I contributed a number of pages to Tango Tiger’s Sabermetrics Wiki. I have not contributed much lately, but eventually I’d like to add some more. Hopefully, though, that won’t be necessary, because other people will add their own work.

I am sure that few of my readers don’t also read Inside the Book, and thus most of you are already familiar with the wiki project. But on the off chance that some of you do fit into that category, I wanted to do another post touting the effort. There are a number of pre-existing pages that need more explanation (particularly, may I suggest the pages on park factors, Win Shares, DIPS (which does not yet exist, although there is a page on BABIP), and fielding metrics?).

I am going to reproduce the page on linear weights here, since I wrote most of it, and it is a heckuva lot better than the LW article on my website, which is embarrassingly bad (I keep intending to replace it, but never actually sit down and do it). As you can see, while this article is longer than some of the others I suggested needed work, it still only scratches the surface of what could be discussed about linear weights, focusing on how the weights are derived. So don’t feel intimidated by the fact that a page has already been started.

Linear Weights

Linear Weights (LW) is a term used broadly to refer to any linear run estimator, and also to the analytical system of Pete Palmer (see Linear Weights System). The pioneer of Linear Weights was Canadian sabermetrician George Lindsey, but the concept was expanded upon and popularized with Palmer's Batting Runs.

Methods for Generating Linear Weights

EMPIRICAL APPROACH

The empirical approach to Linear Weights is closely related to the concept of Run Expectancy. To generate the weights, some sample of data (often all plays in a given league-year, or over the course of several years) is analyzed. The change in run expectancy on each play is calculated as follows:

Change in RE = Final RE - Initial RE + Runs Scored on play

For example, take the case of a grand slam with 2 outs, using this RE Table. The initial RE is for the bases loaded, 2 out state (.815 runs). The final RE is for the new state, which is bases empty, 2 outs (.117 runs). Four runs scored on the play, and thus the value of the play was .117 - .815 + 4 = 3.302 runs.

After doing this process for each play, the results are averaged by event type to produce the Linear Weight values. This procedure results in out values such that the weights estimate runs above average (in other words, the sum of the products of the coefficients and the frequencies of each event will be zero). To estimate the total number of runs scored instead, 1/3 of the expected run total for the inning (equivalent to the bases empty, no outs run expectancy) must be added to the coefficients of events that include outs (e.g. a strikeout, caught stealing, or double play) for each out on the play.
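
To illustrate the procedure, here is a minimal Python sketch. The two run expectancy values quoted in the grand slam example above are used; the remaining RE entries are illustrative, and the toy play list stands in for a full league-season of play-by-play data.

# Minimal sketch of the empirical (run expectancy) approach to linear weights.
# RE values for the loaded/2-out and empty/2-out states match the example in
# the text; the other entries and the toy play list are illustrative only.
from collections import defaultdict

RE = {
    ("empty", 0): 0.555,
    ("empty", 1): 0.297,
    ("empty", 2): 0.117,
    ("loaded", 2): 0.815,
}

# each play: event type, starting base-out state, ending state, runs scored
plays = [
    ("HR", ("loaded", 2), ("empty", 2), 4),   # the grand slam from the example
    ("K",  ("empty", 0),  ("empty", 1), 0),   # a hypothetical strikeout
]

totals, counts = defaultdict(float), defaultdict(int)
for event, start, end, runs in plays:
    # change in RE = final RE - initial RE + runs scored on the play
    # (a play that ends the inning has a final RE of zero)
    delta = RE.get(end, 0.0) - RE[start] + runs
    totals[event] += delta
    counts[event] += 1

# the linear weight of each event is its average change in run expectancy
weights = {event: totals[event] / counts[event] for event in totals}
print(weights)   # e.g. {'HR': 3.302, 'K': -0.258}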

INTRINSIC WEIGHTS BASED ON DYNAMIC ESTIMATORS

Dynamic run estimators differ from linear run estimators in that they do not place a fixed coefficient on each event, but rather attempt to model the run scoring process. Thus, the value of each event varies based on the frequency of other events.

However, for any given set of input statistics, the intrinsic linear weight that the dynamic estimator places on a given event can be determined. If one trusts that the dynamic estimator being used is a good model, then the linear weights it generates for the inputs could be valuable.

Various approaches can be used to determine the intrinsic weights. The so-called "+1 method" adds one of a given event (e.g. one walk or one double) to the input statistics. The difference between the output of the estimator with the additional event and the output without it is the linear weight for that event. More precise estimates can be generated by adding smaller increments (for example, 1/100th of a walk), finding the change in estimated runs scored, and dividing by the size of the increment added. The smaller the increment, the more accurate the estimate, because a small addition disturbs the underlying run environment less.
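
Here is a sketch of the increment method in Python, using one commonly cited basic version of Base Runs (the exact B-factor coefficients vary by author) and a hypothetical statistical line:

# Sketch of the "+1"/small-increment method for intrinsic linear weights.
# The Base Runs coefficients below follow one commonly cited basic version;
# the statistical line is hypothetical.
def base_runs(s, d, t, hr, bb, outs):
    h = s + d + t + hr
    tb = s + 2 * d + 3 * t + 4 * hr
    a = h + bb - hr                                        # baserunners (other than HR)
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02    # advancement factor
    c = outs
    return a * b / (b + c) + hr

line = dict(s=980, d=290, t=30, hr=165, bb=540, outs=4150)

def intrinsic_weight(event, inc=0.01):
    # add a small amount of one event and divide the change in runs by it
    bumped = dict(line)
    bumped[event] += inc
    return (base_runs(**bumped) - base_runs(**line)) / inc

for event in line:
    print(event, round(intrinsic_weight(event), 3))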

For dynamic estimators that can be written as simple formulas, the formula for the intrinsic weights can be found by partially differentiating the equation with respect to each event. The partial derivative is a calculus concept that gives the change in output produced by adding an infinitesimally small amount of an event, which eliminates the distortion caused by changing the underlying run environment.
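
The same idea can be carried out exactly with a symbolic math package. This sketch assumes the sympy library and reuses the basic Base Runs form from the previous sketch, evaluated at a hypothetical set of league totals:

# Sketch: exact intrinsic weights via partial derivatives (requires sympy).
import sympy as sp

s, d, t, hr, bb, outs = sp.symbols("s d t hr bb outs", positive=True)
h = s + d + t + hr
tb = s + 2 * d + 3 * t + 4 * hr
A = h + bb - hr
B = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02
bsr = A * B / (B + outs) + hr

# hypothetical league totals at which to evaluate the derivatives
point = {s: 980, d: 290, t: 30, hr: 165, bb: 540, outs: 4150}
for var in (s, d, t, hr, bb, outs):
    weight = sp.diff(bsr, var).subs(point)
    print(var, round(float(weight), 3))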

The intrinsic weights found through the Base Runs estimator, as well as those from Markov models of run scoring, are the ones that are most often used by sabermetricians, since those models work over a wider range of contexts than other dynamic estimators like Runs Created.

MULTIPLE LINEAR REGRESSION

Linear weights are sometimes generated by running multiple linear regressions to predict runs from the various offensive inputs. This is usually done on team seasonal data, although it could be done on game or inning level data too.
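
As a sketch of the mechanics, a least-squares fit can be run with numpy. The team-season data below is synthetic, generated from assumed "true" weights, so the regression simply recovers them; with real team data, the problems described next come into play.

# Sketch: regression-based linear weights on synthetic team-season data.
import numpy as np

rng = np.random.default_rng(0)
n_teams = 300

# synthetic event totals, drawn roughly in the range of real team seasons
X = np.column_stack([
    rng.normal(980, 60, n_teams),   # singles
    rng.normal(285, 25, n_teams),   # doubles
    rng.normal(30, 8, n_teams),     # triples
    rng.normal(160, 30, n_teams),   # home runs
    rng.normal(530, 50, n_teams),   # walks
])

# runs generated from assumed "true" weights plus noise
true_weights = np.array([0.47, 0.78, 1.09, 1.40, 0.33])
y = X @ true_weights + rng.normal(0, 25, n_teams)

# the fitted coefficients are the regression-based linear weights
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["1B", "2B", "3B", "HR", "BB"], coefs.round(3))))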

The drawback of regression is that it is a purely mathematical procedure, and the results do not always conform to what logic or other means (such as empirical linear weights) tell us to be true about baseball. The correlation between an event and runs sometimes does not reflect the impact that it has upon runs. For example, take this regression on team season data from 1954-1999 found in Jim Albert and Jay Bennett's Curve Ball:

R/G = (.49S + .61D + 1.14T + 1.50HR + .33W + .14SB + .73SF)/G

A double is only seen to contribute .61 runs, well below the .8 usually found through other procedures. Additionally, sacrifice flies are valued at .73 runs. This result is not surprising when one considers that sacrifice flies always result in runs. However, as observers we know that while the sacrifice fly contributes to the run, the more important elements were the events that allowed a runner to reach third base with fewer than two outs. Albert and Bennett explain that sacrifice flies are a "carrier" category, meaning "[They] may carry more information than their literal name implies."

The choice of categories in a regression often affects the coefficients as well. It is not unusual for a regression using Total Bases and Hits to give coefficients for hit types more in line with our expectations than a regression using singles, doubles, triples, and home runs as separate inputs.

SIMPLE MODELS

In lieu of play-by-play data or intrinsic weights, several methods have been created for producing approximate linear weights for different contexts. These approaches rely on assumptions about the relationships between the values of offensive events that hold fairly well within the normal range of team contexts. While they may not work well when applied to theoretically extreme teams (for example, nine Babe Ruths), they can be used to generate reasonable weights for normal teams and leagues.

Both David Smyth and Tangotiger have published these types of models. Smyth's begins with the premise that each on-base event is worth the average number of runs scored per baserunner (approximated as (R - HR)/(H + W - HR)), and proceeds to use various assumptions to estimate each event's value in terms of advancing baserunners. Combining these two values gives an overall coefficient for each event.
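
For reference, the baserunner scoring rate that anchors the on-base portion of such a model can be computed directly from the approximation above; the team totals here are hypothetical:

# average runs scored per (non-HR) baserunner, per the approximation in the text;
# the team totals are hypothetical
R, HR, H, W = 750, 160, 1450, 540
print((R - HR) / (H + W - HR))   # roughly 0.32 runs per baserunner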

SKELETONS AND TRIAL AND ERROR

Skeletons refer to equations crafted from a relative weighting of offensive events, which are then multiplied by a constant in order to estimate runs. An example of an estimator developed by a skeleton approach is Paul Johnson's Estimated Runs Produced. Johnson used play-by-play data to determine the average number of bases gained on hits and walks, then experimented to find a value for outs, and found a constant (.16) which would bring his equation in line with runs scored.
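
The general mechanics can be sketched as follows. The skeleton weights and team totals below are placeholders chosen for illustration, not Johnson's actual ERP formula:

# Sketch of the skeleton approach: fix the relative weights of events, then
# solve for the single constant that scales the skeleton to actual runs.
# The skeleton weights and team totals are placeholders, not ERP itself.
teams = [
    # (S, D, T, HR, BB, outs, actual runs) - hypothetical team seasons
    (980, 290, 30, 165, 540, 4150, 780),
    (1010, 270, 25, 140, 500, 4180, 750),
    (950, 300, 35, 180, 560, 4120, 805),
]

def skeleton(s, d, t, hr, bb, outs):
    # relative weighting of events, chosen for illustration
    return 3 * s + 5 * d + 7 * t + 9 * hr + 2 * bb - 0.6 * outs

# the constant is whatever makes total skeleton output match total runs
total_skeleton = sum(skeleton(*team[:-1]) for team in teams)
total_runs = sum(team[-1] for team in teams)
constant = total_runs / total_skeleton
print(round(constant, 3))   # the implied linear weight of a single is 3 * constant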

In the case of ERP, the logic used to create the skeleton was similar to that of the Run Expectancy approaches described above, since both relied on examination of play-by-play data. However, Jim Furtado's approach in developing Extrapolated Runs was a hybrid. Using ERP as a starting point, he also considered regression results and experimented until he found a formula that he felt made common sense and had superior accuracy when applied to his sample data.

This family of techniques is often criticized as forsaking the theoretical soundness of empirical approaches in pursuit of more accurate predictions on the sample data.

Examples of Linear Weight Estimators

Below is a list of linear run estimators commonly used by sabermetricians. However, it should be noted that linear weight methods often do not have unique names, since they are tailored to a specific environment or context. The methods below use long-term average values and are generally not designed for any specific context. The wiki page lists Batting Runs, Estimated Runs Produced, and Extrapolated Runs.
