In my first series about runs per game distributions, I wrote about how to use estimates of the probability of scoring k runs (however these probabilities were estimated, Enby distribution or an alternative approach) to estimate a team’s winning percentage. I’m going to circle back to that here, and most of the content is a repeat of the earlier post.

However, I think this is an important enough topic to rehash. In fact, a winning percentage estimator strikes me as the most logical application for a runs per game distribution, albeit one that is not particularly helpful to everyday sabermetric practice. After all, multiple formulas to estimate W% as a function of runs scored and runs allowed have been developed, and most of them work quite well when working with normal major league teams--well enough to make it difficult to imagine that there is any appreciable gain in accuracy to be had. Better yet, these W% estimators are fairly simple--even the most complex versions in common use, Pythagenport/pat, can be quickly tapped out on a calculator in about thirty seconds.

Given that there are powerful, relatively simple W% models already in use, why even bother to examine a model based on the estimated scoring distribution? There are three obvious reasons that come to my mind. The first is that such a model serves as a check on the others. Depending on how much confidence one has in the underlying run distribution model, it is possible that the resulting W% estimator will produce a better estimate, at least at the extremes. We know of course that some of the easier models don’t hold up well in extreme situations--linear estimators will return figures below zero or above one at some point, and fixed Pythagorean exponents fray at the extremes. While we know that Pythagenpat works at the known point of 1 RPG and appears to work well at other extreme values, it doesn’t hurt to have another way of estimating W% in those extremes to see if Pythagenpat is corroborated, or whether the models disagree. This can also serve as a check on Enby--if the results vary too much from what we expect, it may imply that Enby does not hold up well at extremes itself.

A second reason is that it’s plain fun if you like esoteric sabermetrics (and if you’re reading this blog, it’s a good bet that you do). I’ve never needed an excuse to mess around with alternative methods, particularly when it comes to W% estimators, which along with run estimators are my own personal favorite sabermetric tools.

But the third reason is the one that I want to focus on here, which is that a W% estimator based on an underlying estimate of the run distribution is from one perspective the simplest possible estimator. This may seem to be an absurd statement given all of the steps that are necessary to compute Enby estimates, let alone plugging these into a W% formula. But from a first principles standpoint, the distribution-based W% estimator is the simplest to explain, because it is defined by the laws of the game itself.

If you score no runs, you don’t win. If you score one run, you win if you allow zero runs. If you score two runs, you win if you allow either zero or one run, and on it goes ad infinitum. If at the end of nine innings you have scored and allowed an equal number of runs, you play on until there is an inning in which an unequal, greater than zero number of runs are scored. This fundamental identity is what all of the other W% estimators attempt to approximate; its mechanics are what they sweep under the rug with their shortcuts. The distribution-based approach is computationally dense but conceptually easy (and correct). Of course, to bring points one and three together, the definition may be correct, but the resulting estimates are useless if the underlying model (Enby in this case) does not work.
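The identity above translates directly into code. Here is a minimal sketch (the function name is my own): it assumes the two teams' per-game scoring distributions are independent, and it assumes nine-inning ties are resolved in extra innings in proportion to each team's share of decided games, which is a simplification rather than a full model of extra-inning play:

```python
def win_pct(p_a, p_b):
    """Estimate Team A's W% from per-game scoring distributions.

    p_a[k], p_b[k] = probability that Team A (resp. Team B) scores
    exactly k runs in a game; assumes the distributions are independent.
    """
    # A wins when it scores more than B, and vice versa
    win = sum(p_a[k] * sum(p_b[:k]) for k in range(len(p_a)))
    loss = sum(p_b[k] * sum(p_a[:k]) for k in range(len(p_b)))
    tie = sum(pa * pb for pa, pb in zip(p_a, p_b))
    # Nine-inning ties go to extra innings; assume they split in
    # proportion to the teams' rates of winning decided games
    return win + tie * win / (win + loss)

# Two identical teams should come out at .500:
p = [0.10, 0.25, 0.30, 0.20, 0.15]
print(round(win_pct(p, p), 3))  # 0.5
```

In practice the distributions would be truncated at some large run total (say, 25 runs), which is why the lists are finite here.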

In order to produce our W% estimate, we first need to use Enby to estimate the scoring distribution for the two teams. This is not as simple as using the Enby parameters we have already developed based on the Tango Distribution with c = .767. Tango has found that his method produces more accurate results for two teams when c is set equal to .852 instead.

In the previous post, I walked through the computations for the Enby distribution with any c value, so this is an easy substitution to make. But why is it necessary? I don’t have a truly satisfactory answer to that question--it's trite to just assert that it works better for head-to-head matchups because of the covariance between runs scored and runs allowed, even if that is in fact the right answer.

How will modifying the control value alter the Enby distribution? All of the parameters will be affected, because all depend on the control value in one way or another. First, B and r (the latter as it is initially figured, before zero modification):

VAR = RG^2/9 + (2/c - 1)*RG

r = RG^2/(VAR - RG)

B = VAR/RG - 1

When c is larger, the variance of runs scored will be smaller. We can see this by examining the equations for variance with c = .767 and .852:

VAR (.767) = RG^2/9 + 1.608*RG

VAR (.852) = RG^2/9 + 1.347*RG

This results in a larger value for r and a smaller value for B, but these parameters don’t have an intuitive baseball explanation, unlike variance. It’s difficult to explain (for me at least) why variance of a single team’s runs scored should be lower when considering a head-to-head matchup, but that’s the way it works out.
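To make the comparison concrete, here is a quick sketch computing the three quantities above for both c values (4.5 RG is an assumed example value; the function name is my own):

```python
def enby_params(rg, c):
    """Initial Enby parameters (before zero modification) from the
    formulas above: variance, then r and B."""
    var = rg ** 2 / 9 + (2 / c - 1) * rg
    r = rg ** 2 / (var - rg)
    b = var / rg - 1
    return var, r, b

# Larger c -> smaller variance, larger r, smaller B
for c in (0.767, 0.852):
    var, r, b = enby_params(4.5, c)
    print(f"c = {c}: VAR = {var:.3f}, r = {r:.3f}, B = {b:.4f}")
```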

It should be noted that if the sole purpose of this exercise is to estimate W%, we don’t have to care whether the actual probability of each team scoring k runs is correct. All we need to do is have an accurate estimate of how often Team A’s runs scored are greater than Team B’s.

By increasing c, we also reduce the probability of a shutout, as can be seen from the formula for z:

z = (RI/(RI + c*RI^2))^9

Originally, I had intended to display some graphs showing the behavior of the three parameters by RG with each choice of c, but these turned out not to be of any particular interest. I ran similar graphs earlier in the series with parameters based on the earlier variance model, and the shapes of the resulting functions are quite similar. The only real visual difference when c varies is what appears to be a linear shift for r and B (the B shift is linear, the r shift not quite).

What might be more interesting is looking at how c shapes the estimated run distribution for a team with a given RG. I’ll look at three teams--one average (4.5 RG), one extremely low-scoring (2.25 RG), and one extremely high-scoring (9 RG). First, the 4.5 RG team:

As you may recall from earlier, Enby consistently overestimates the frequency with which a normal major league team will score 2-4 runs. Using the .852 c value exacerbates this issue; in fact, the main thing to take away from this set of graphs is that the higher c value clusters more probability around the mean, while the lower c value leaves more probability for the tails.

The 2.25 RG team:

And the 9 RG team:

## Tuesday, August 22, 2017

### Enby Distribution, pt. 4: Revisiting W%

## Thursday, August 10, 2017

### Bottoming Out

On June 5, OSU Athletic Director Gene Smith unceremoniously fired Thad Matta, the winningest men’s basketball coach in the history of the school. He did so months after the normal time to fire coaches had passed, and he did so in a way that ensured that the end of Matta’s tenure would be the dominant story in college basketball over the next week. Matta won four regular season Big Ten championships, went to two Final Fours, and was as close to universally respected and beloved by his former players as you will ever find in college basketball. He did all of this while dealing with a debilitating condition that made routine tasks like walking and taking off his shoes a major challenge; it was a side effect of a surgery performed at the university’s own hospital. OSU was coming off a pair of seasons without making the NCAA Tournament, but basketball is a sport in which a roster can get turned around in a hurry, and this author feels that Matta had more than earned another year or two in which to have the opportunity to do just that. Gene Smith felt otherwise.

On May 20, the OSU baseball team lost to Indiana 4-3 at home. This brought an end to a season in which they went 22-34, the school’s worst record since going 6-12 in 1974. They went 8-16 in the Big Ten, the worst showing since going 4-12 in 1987. The season brought Greg Beals’ seven-year record at OSU to 225-167 (.574) and his Big Ten record to 85-83 (.506). Setting aside 2008-2014, a seven-year stretch in which OSU had a .564 W% (since four of the seasons were coached by Beals), the seven-year record is OSU’s worst since 1986-1992. The seven-year stretch in the Big Ten is the worst since 1984-1990 (.486). The Buckeyes finished eleventh in the Big Ten, which in fairness wasn’t possible until the addition of Nebraska, but since the Big Ten eliminated divisions in 1988, the lowest previous conference standing had been seventh (out of 10 in 2010, out of 11 in 2014, out of 13 in 2015).

The OSU season is hardly worth recapping in detail, except to point out that baseball is such that Oregon State could go 56-6 on the year yet have one of those losses come against the Buckeyes (February 24, 6-1; the Beavers won a rematch 5-1 two days later). The other noteworthy statistical oddity is that in eight Big Ten series, Ohio won just one (2-1 at Penn State). They were swept once (home against Minnesota) and the other six were all 1-2 for the opposition. The top eight teams in the conference qualify for the tournament; OSU finished four games out of the running, eliminated even before the final weekend.

The Buckeyes’ .393 overall W% and .412 EW% were both eleventh of thirteen Big Ten teams (the forces of darkness led at .724 and .748 respectively), and their .463 PW% was eighth (again, the forces of darkness led with .699). OSU was twelfth with 5.07 R/G and tenth with 6.05 RA/G, although Bill Davis Stadium is a pitcher’s park and those are unadjusted figures. OSU’s .659 DER was last in the conference.

None of this was surprising; OSU lost a tremendous amount of production from 2016, which was Beals’ most successful team, notching his only championship (Big Ten Tournament) and NCAA appearance. With individual exceptions, outside of the 2016 draft class, Beals has failed to recruit and develop talent, often patching his roster with copious amounts of JUCO transfers rather than underclassmen developed in the program. Never was this more acute than in 2017. None of this is meant to be an indictment of the players, who did the best they could to represent their school. It is not their fault that the coach put them in situations that they couldn’t handle or weren’t ready for.

Sophomore catcher Jacob Barnwell had a solid season, hitting .254/.367/.343 for only -1 RAA; his classmate and backup Andrew Fishel only got 50 PA but posted a .400 OBA. First base/DH was a real problem position, as senior Zach Ratcliff was -8 RAA and JUCO transfer junior Bo Coolen chipped in -6; both had secondary averages well below the team average. Noah McGowan, another JUCO transfer, started at second (and got time in left as well), with -3 RAA in 162 PA before getting injured. True freshman Noah West followed him into the lineup, but a lack of offense (.213/.278/.303 in 105 PA) gave classmate Connor Pohl a shot. Pohl is 6’5” and his future likely lies at third, but his bat gave a boost to the struggling offense (.325/.386/.450 in 89 PA).

Senior Jalen Washington manned shortstop and acquitted himself fine defensively and at the plate (.266/.309/.468), and was selected by San Diego in the 28th round. Sophomore third baseman Brady Cherry did not build on the power potential his freshman year seemed to show, hitting four homers in 82 more PA than he had when he hit four in 2016. His overall performance (.260/.333/.410) was about average (-2 RAA).

Outfield was definitely the bright spot for the offense, despite getting little production out of JUCO transfer Tyler Cowles (.190/.309/.314 in 129 PA). Senior Shea Murray emerged from a pitching career marred by injuries to provide adequate production and earn the left field job (.252/.331/.449, 0 RAA) and was drafted in the 18th round by Pittsburgh, albeit as a pitcher. Junior center fielder Tre’ Gantt was the team MVP, hitting .314/.426/.426, leading the team with 18 RAA, and was drafted in the 29th round by Cleveland. True freshman right fielder Dominic Canzone was also a key contributor, challenging for the Big Ten batting average lead (.343/.398/.458 for 8 RAA).

On the mound, OSU never even came close to establishing a starting rotation due to injuries and ineffectiveness. Nine pitchers started a game, and only one of them had greater than 50% of his appearances as a starter. That was senior Jake Post, who went 1-7 over 13 starts with a 6.41 eRA. Sophomore lefty Connor Curlis was most effective, starting eight times for +3 RAA with 8.3/2.7 K/W. He tied for the team innings lead with classmate Ryan Feltner, who was -13 RAA with a 6.71 eRA. Junior Yianni Pavlopoulos, the closer a year ago, was -10 RAA over 40 innings between both roles. Junior Adam Niemeyer missed time with injuries, appearing in just ten games (five starts) for -3 RAA over 34 innings. Freshman Jake Vance was rushed into action and allowed 20 runs and walks in 26 innings (-4 RAA). And JUCO transfer Reece Calvert gave up a shocking 39 runs in 39 innings.

I thought the bullpen would be the strength of the team before the season. In the case of Seth Kinker, I was right. The junior slinger was terrific, pitching 58 innings (21 relief appearances, 3 starts) and leading the team by a huge margin with 13 RAA (8.4/2.0 K/W). But the rest of the bullpen was less effective. Junior Kyle Michalik missed much of the season with injuries and wasn’t that effective when on the mound (6.85 RA and just 4.8 K/9 over 22 innings). Senior Joe Stoll did fine in the LOOGY role, something Beals has brought to OSU, with 3 RAA in 23 innings over 25 appearances. Junior Austin Woodby had a 6.00 RA over 33 innings but deserved better with a 4.79 eRA and 5.5/1.8 K/W. The only other reliever to work more than ten innings was freshman sidearmer Thomas Waning (3 runs, 11 K, 4 W over 12 innings). Again, it’s hard to describe the roles because almost everyone was forced to both start and relieve.

It’s too early to hazard a prognosis for 2018, but given the lack of promising performances from young players, it’s hard to be optimistic. What remains to be seen is whether Smith’s ruthlessness can be transferred from coaches who do not deserve it to those who have earned it in spades. No, baseball is not a revenue sport, and no, baseball is not bringing the athletic department broad media exposure. But when properly curated, the OSU baseball program is a top-tier Big Ten program, with the potential to make runs in the NCAA Tournament, and bring in more revenue than most of the “other” 34 programs that are not football or men’s basketball. Neglected in the hands of a failed coach, it is capable of putting up a .333 W% in conference play. Smith, not Beals, is the man who will most directly impact the future success of the program.

## Wednesday, July 12, 2017

### Enby Distribution, pt. 3: Enby Distribution Calculator

At this point, I want to re-explain how to use the Enby distribution, step-by-step. While I already did this in part 6 of the original series, I now have the new variance estimator as found by Alan Jordan to plug in, and so to avoid any confusion and to make this easy if anyone ever wants to implement it themselves, I will recount it all in one location. I will also re-introduce a spreadsheet that you can use to estimate the probability of scoring X runs based on the Enby distribution.

Step 1: Estimate the variance of runs scored per game (VAR) as a function of mean runs/game (RG):

VAR = RG^2/9 + (2/c - 1)*RG

where c is the control value from the Tango Distribution. For normal applications, we’ll assume that c = .767.

Step 2: Use the mean and variance to estimate the parameters (r and B) of the negative binomial distribution:

r = RG^2/(VAR - RG)

B = VAR/RG - 1

B will be retained as a parameter for the Enby distribution.

Step 3: Find the probability of zero runs scored as estimated by the negative binomial distribution (we’ll call this value a):

a = (1 + B)^(-r)

Step 4: Use the Tango Distribution to estimate the probability of being shutout. This will become the Enby distribution parameter z:

z = (RI/(RI + c*RI^2))^9

where RI is runs/inning, which we’ll estimate as RG/9.

Step 5: Use trial and error to estimate a new value of r given the modified value at zero. B and z will stay constant, but r must be chosen so as to ensure that the correct mean RG is returned by the Enby distribution. Use the following formula to estimate the probability of k runs scored per game using the non-modified negative binomial distribution:

q(0) = a

q(k) = (r)(r + 1)(r + 2)(r + 3)…(r + k - 1)*B^k/(k!*(1 + B)^(r + k)) for k >=1

Then modify by taking:

p(0) = z

p(k) = (1 - z)*q(k)/(1 - a) for k >= 1

The mean is calculated as:

mean = sum (from k = 1 to infinity) of (k*p(k)) = p(1) + 2*p(2) + 3*p(3) + ...

Now you have the parameters r, B, and z and the probability of scoring k runs in a game.
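Steps 1-5 can be put together in one short routine. A sketch (function and variable names are my own): since the sum over k >= 1 of k*q(k) for the unmodified negative binomial is just r*B, the Enby mean works out to (1 - z)*r*B/(1 - a), so the trial and error of step 5 can be done by bisecting on r rather than summing the series each time:

```python
def fit_enby(rg, c=0.767):
    """Fit Enby (zero-modified negative binomial) parameters for a
    given runs-per-game average, following steps 1-5 in the text."""
    # Steps 1-2: variance, then the B parameter
    var = rg ** 2 / 9 + (2 / c - 1) * rg
    b = var / rg - 1
    # Step 4: Tango Distribution estimate of the shutout probability
    ri = rg / 9
    z = (ri / (ri + c * ri ** 2)) ** 9

    # Enby mean as a function of r, with B and z held fixed:
    # mean = (1 - z)*r*B/(1 - a), where a = (1 + B)^(-r) (step 3)
    def mean(r):
        a = (1 + b) ** (-r)
        return (1 - z) * r * b / (1 - a)

    # Step 5: bisect on r until the Enby mean matches RG
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean(mid) < rg:
            lo = mid
        else:
            hi = mid
    return {"r": (lo + hi) / 2, "B": b, "z": z}

params = fit_enby(4.5)
```

This sketch assumes the target RG is in the normal range; at very low scoring levels the bracketing bounds on r would need care. From the fitted parameters, p(0) = z and p(k) for k >= 1 follow from the formulas above.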

I previously published a spreadsheet that provided the approximate Enby distribution parameters at each .05 increment of RG between 3 and 7. The link below will take you to an updated version of this calculator. It is updated in two ways: first, the Tango Distribution estimate of variance developed by Alan Jordan is used as in the example above. Secondly, I have added lines for RG levels between 0-3 and 7-15 RG (at intervals of .25). Previously, you could enter in any value between 3-7 RG and the calculator would round it to nearest .05; now I’m going to make you enter a legitimate value yourself or accept whatever vlookup() gives you.

P(x) is the probability of scoring x runs in a game, P(<= x) is the probability of scoring that many or fewer, and P(> x) is the probability of scoring more than x runs.

Enby Calculator

## Tuesday, June 20, 2017

### Enby Distribution, pt. 2: Revamping the Variance Estimate

All models are approximations of reality, but some are more useful than others. The notion of being able to estimate the runs per game distribution cleanly in one algorithm (rather than patching together runs per inning distributions or using simulators) is one that can be quite useful in estimating winning percentage or trying to distinguish between the effectiveness of team offenses beyond simply noting their runs scored totals. I’d argue that a runs per game distribution is a fundamentally useful tool in classical sabermetrics.

However, while such a model would be useful, Enby as currently constructed falls well short of being an ideal tool. There are a few major issues:

1) It is not mathematically feasible to solve directly for the parameters of a zero-modified negative binomial distribution, which forces me to use trial and error to estimate Enby coefficients. In doing so, the distribution is no longer able to exactly match the expected mean and variance--instead, I have chosen to match the mean precisely, and hope that the variance is not too badly distorted.

2) The variance that we should expect for runs per game at any given level of average R/G is itself unknown. I developed a simple formula to estimate variance based on some actual team data, but that formula is far from perfect and there’s no particular reason to expect it to perform well outside of the R/G range represented by the data from which it was developed.

3) An issue with run distribution models found by Tom Tango in the course of his research on runs per inning distribution is that the optimal fit for a single team’s distribution may not return optimal results in situations in which two teams are examined simultaneously (such as using the distribution to model winning percentage). One explanation for this phenomenon is the covariance between runs scored and runs allowed in a given game, due to either environmental or strategic causes.

I have recently attempted to improve the Enby distribution by focusing on these obvious flaws. Unfortunately, my findings were not as useful as I had hoped they would be, but I would argue (hope?) that they represent at least small progress in this endeavor.

During the course of writing the original series on this topic, I was made aware of work being done by Alan Jordan, who was developing a spreadsheet that used the Tango Distribution to estimate scoring distributions and winning percentage. One of the underpinnings was that he found (or found work by Darren Glass and Phillip Lowry that demonstrated) that the variance of runs scored per inning as predicted by the Tango Distribution could be calculated as follows (where RI = runs per inning and c is the Tango Distribution constant):

Variance (inning) = RI*(2/c + RI - 1) = RI^2 + (2/c - 1)*RI

Assuming independence of runs per inning (this is a necessary assumption to use the Tango Distribution to estimate runs per game), the variance of runs per game will simply be nine times the variance of runs per inning (assuming of course that there are precisely nine innings per game, as I did in estimating the z parameter of Enby from the Tango Distribution). If we attempt to simplify this further by assuming that RI = RG/9, where RG = runs per game:

Variance (game) = 9*(RI^2 + (2/c - 1)*RI) = 9*((RG/9)^2 + (2/c - 1)*RG/9) = RG^2/9 + (2/c - 1)*RG

The traditional value of c used to estimate runs per inning for one team is .767, so if we substitute that for c, we wind up with:

Variance (game) = 1.608*RG + .111*RG^2
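A quick numerical check of the algebra above (a sketch; the RG values are arbitrary test points) confirms that nine times the per-inning variance at RI = RG/9 equals the per-game formula, and that the rounded coefficients are faithful:

```python
C = 0.767  # Tango Distribution control value for a single team

def var_inning(ri, c=C):
    """Tango Distribution variance of runs per inning."""
    return ri ** 2 + (2 / c - 1) * ri

def var_game(rg, c=C):
    """Variance of runs per game derived above."""
    return rg ** 2 / 9 + (2 / c - 1) * rg

for rg in (2.25, 4.5, 9.0):
    # nine independent innings -> nine times the per-inning variance
    assert abs(9 * var_inning(rg / 9) - var_game(rg)) < 1e-9
    # the rounded-coefficient version from the text
    assert abs(var_game(rg) - (1.608 * rg + rg ** 2 / 9)) < 0.01
```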

When I worked on this problem previously, I did not have any theoretical basis for an estimator of variance as a function of RG, so I experimented with a few possibilities and found what appeared to be a workable correlation between mean RG and the ratio of variance to mean. I used linear regression on a set of actual team data (1981-1996) and wound up with an equation that could be written as:

Variance (game) = 1.43*RG + .1345*RG^2

Note the similarities between this equation and the equation based on the Tango Distribution - they both take the form of a quadratic equation less the constant (I purposefully avoided constants in developing my variance estimator so as to avoid unreasonable results at zero and near-zero RG). The coefficients are somewhat different, but the form of the equation is identical.

On one hand, this is wonderful for me, because it vindicates my intuition that this was a reasonable way to estimate variance. On the other hand, this is very disappointing, because I had hoped that Jordan’s insight would allow me to significantly improve the variance estimate. Instead, any gains to be had here are limited to improving the equation by using a more theoretical basis to estimate its coefficients, but there is no change in the form of this equation.

In fact, any revision to the estimator will reduce accuracy over the 1981-96 sample that I am using, since the linear regression already found optimal coefficients for this particular dataset. This by no means should be taken as a claim on my part that the regression-based equation should be used rather than the more theoretically-grounded Tango Distribution estimate, simply an observation that any improvement will not show up given the confines of the data I have at hand.

What about data from outside that sample? I have easy access to the four seasons from 2009-2012. In these seasons, major league teams have averaged 4.401 runs per game and the variance of runs scored per game is 9.373. My equation estimates the variance should be 8.90, while the Tango-based formula estimates 9.23. In this case, we could get a near-precise match by using c = .757.
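These figures are easy to verify (a sketch plugging the empirical mean quoted above into the three variance equations):

```python
rg = 4.401  # 2009-2012 MLB average runs per game

mine = 1.43 * rg + 0.1345 * rg ** 2         # regression-based equation
tango = rg ** 2 / 9 + (2 / 0.767 - 1) * rg  # Tango-based, c = .767
tuned = rg ** 2 / 9 + (2 / 0.757 - 1) * rg  # Tango-based, c = .757

print(round(mine, 2), round(tango, 2), round(tuned, 2))  # 8.9 9.23 9.38
```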

While we know how accurate each estimator is with respect to variance for this case, what happens when we put Enby to use to estimate the run distribution? The Enby parameters for 4.40 RG using my original equation are (B = 1.0218, r = 4.353, z = .0569). If we instead use the Tango estimated variance of 9.23, the parameters become (B = 1.0970, r = 4.041, z = .0569). With that, we can calculate the estimated frequencies of X runs scored using each estimator and compare to the empirical frequencies from 2009-2012:

Eyeballing this, the Tango-based formula is closer for one run, but exacerbates the recurring issue of over-estimating the likelihood of two or three runs. It makes up for this by providing a better estimate at four and five runs, but a worse estimate at six. After that the two are similar, although the Tango estimate provides for more probability in the tail of the distribution, which in this case is consistent with empirical results.

For now, I will move on to another topic, but I will eventually be coming back to this form of the Tango-based variance estimate, re-estimating the parameters for 3-7 RG, and providing an updated Enby calculator, as I do feel that there are distinct advantages to using the theoretical coefficients of the variance estimator rather than my empirical coefficients.

## Tuesday, May 09, 2017

### Enby Distribution, pt. 1: Pioneers

A few years ago, I attempted to demonstrate that one could do a decent job of estimating the distribution of runs scored per game by using the negative binomial distribution, particularly a zero-modified version given the propensity of an unadulterated negative binomial distribution to underestimate the probability of a shutout. I dubbed this modified distribution Enby.

I’m going to be re-introducing this distribution and adopting a modification to the key formula in this series, but I wanted to start by acknowledging that I am not the first sabermetrician to apply the negative binomial distribution to the matter of the runs per game distribution. To my knowledge, a zero-modified negative binomial distribution had not been implemented prior to Enby, and while the zero-modification is a significant improvement to the model, it would be disingenuous not to acknowledge and provide an overview of the two previous efforts using the negative binomial distribution of which I am aware.

I acknowledged one of these in the original iteration of this series, but inadvertently overlooked the first. In the early issues of Bill James’ __Baseball Analyst__ newsletter, Dallas Adams published a series of articles on run distributions, ultimately developing an unwieldy formula I discussed in the linked post. What I overlooked was an article in the August 1983 edition in which the author noted that while the Poisson distribution worked for hockey, it would not work for baseball, because the variance of runs per game is not equal to the mean, but rather is twice the mean. But a "modified Poisson" distribution provided a solution.

The author of the piece? Pete Palmer. Palmer is often overlooked to an undue extent when sabermetric history is recounted. While one could never omit Palmer from such a discussion, his importance is often downplayed. But the sheer volume of methods that he developed or refined is such that I have no qualms about naming him the most important technical sabermetrician by a wide margin. Park factors, run to win converters, linear weights, relative statistics, OPS for better or worse, the construct of an overall metric by adding together runs above average in various discrete components of the game...these were all either pioneered or greatly improved by Palmer. And while it is not nearly as widespread in use as his other innovations, you can add using the negative binomial distribution for the runs per game distribution to the list.

Palmer says that he learned about this “modified Poisson” in a book called __Facts From Figures__ by M.J. Moroney. The relevant formulas were:

Mean (u) = p/c

Variance (v) = u + u/c

p(0) = (c/(1 + c))^p

p(1) = p(0)*p/(1 + c)

p(2) = p(1)*(p + 1)/(2*(1 + c))

p(3) = p(2)*(p + 2)/(3*(1 + c))

p(n) = p(0)*p*(p + 1)*(p + 2)*...*(p + n - 1)/(n!*(1 + c)^n)

The text that I used renders the negative binomial distribution as:

p(k) = (1 + B)^(-r) for k = 0

p(k) = (r)(r + 1)(r + 2)(r + 3)…(r + k - 1)*B^k/(k!*(1 + B)^(r + k)) for k >=1

mean (u) = r*B

variance(v) = r*B*(1 + B)

You may be forgiven for not immediately recognizing these two as equivalent; I did not at first glance. But if you recognize that r = p and B = 1/c, then you will find that the mean and variance equations are equivalent and that the formulas for each n or k depending on the nomenclature used are equivalent as well.
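The equivalence is easy to confirm numerically. A sketch (the function names are my own, and the p and c values are arbitrary test inputs) comparing the first several terms of the two formulations with r = p and B = 1/c:

```python
from math import prod, factorial

def palmer(n, p, c):
    """Palmer's 'modified Poisson' probability of n runs."""
    p0 = (c / (1 + c)) ** p
    # empty product for n = 0 gives p(0) back
    return p0 * prod(p + i for i in range(n)) / (
        factorial(n) * (1 + c) ** n)

def neg_bin(k, r, b):
    """Negative binomial as rendered in the text."""
    if k == 0:
        return (1 + b) ** (-r)
    return prod(r + i for i in range(k)) * b ** k / (
        factorial(k) * (1 + b) ** (r + k))

p, c = 4.1, 0.9
for n in range(10):
    assert abs(palmer(n, p, c) - neg_bin(n, p, 1 / c)) < 1e-12
```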

So Palmer was positing the negative binomial distribution to model runs scored. He noted that the variance of runs per game is about two times the mean, which is true. In my original Enby implementation, I estimated variance as 1.430*mean + .1345*mean^2, which for the typical mean value of around 4.5 R/G works out to an estimated variance of 9.159, which is 2.04 times the mean. Of course, the model can be made more accurate by allowing the ratio of variance to mean to vary from two.

The second use of the negative binomial distribution to model runs per game of which I am aware was implemented by Phil Melita. Mr. Melita used it to estimate winning percentage and sent me a copy of his paper (over a decade ago, which is profoundly disturbing in the existential sense). Unfortunately, I am not aware of the paper ever being published so I hesitate to share too much from the copy in my possession.

Melita’s focus was on estimating W%, but he did use negative binomial to look at the run distribution in isolation as well. Unfortunately, I had forgotten his article when I started messing around with various distributions that could be used to model runs per game; when I tried negative binomial and got promising results, I realized that I had seen it before.

So as I begin this update of what I call Enby, I want to be very clear that I am not claiming to have “discovered” the application of the negative binomial distribution in this context. To my knowledge using zero-modification is a new (to sabermetrics) application of the negative binomial, but obviously is a relatively minor twist on the more important task of finding a suitable distribution to use. So if you find that my work in this series has any value at all, remember that Pete Palmer and Phil Melita deserve much of the credit for first applying the negative binomial distribution to runs scored per game.

## Thursday, April 13, 2017

## Saturday, April 01, 2017

### 2017 Predictions

All the usual disclaimers. This is not serious business.

AL EAST

1. Boston

2. Toronto (wildcard)

3. New York

4. Baltimore

5. Tampa Bay

I have noted the last couple years that I always pick the Red Sox--last year was one of the years where that was the right call. Boston has question marks, and they have less talent on hand to fill holes than in past years, but no one else in the division is making a concerted push with the Blue Jays retrenching and the Yankees in transition. While much has been made of the NL featuring more of a clear dichotomy between contenders and rebuilders, the AL features three strong division favorites and a void for wildcard contention that Toronto may well once again fill. New York looks like a .500 team to me, and one with as strong a recent history of overperforming projections/Pythagorean as darlings like Baltimore and Kansas City, but one that gets far less press for it. (I guess the mighty Yankees aren’t a good sell as a team being unfairly dismissed by the statheads). The Orioles offense has to take a step back at some point with only Machado and Schoop being young, and if that happens the rotation can’t carry them. It’s not that I think the Rays are bad; this whole division is filled with potential wildcard contenders.

AL CENTRAL

1. Cleveland

2. Detroit

3. Kansas City

4. Minnesota

5. Chicago

I have a general policy of trying to pick against the Indians when reasonable, out of irrational superstition and an attempt to counteract any unconscious fan-infused optimism. Last year I felt they were definitely the best team in this division on paper but picked against them regardless. But the gap is just too big to ignore this season, so I warily pick them in front. There are reasons to be pessimistic--while they didn’t get “every break in the world last season” as Chris Russo says in a commercial that hopefully will be off the air soon, it’s easy to overstate the impact of their pitching injuries since the division was basically wrapped up before the wheels came off the rotation. Consider the volatility of bullpens, the extra workload for the pitchers who were available in October, and the fact that the two who weren’t aren’t the best health bets in the world, and you can paint a bleaker picture than the triumphalism that appears to be the consensus. On the other hand, there are Michael Brantley, the catchers, and the fact that the offense didn’t score more runs than RC called for last year. I see them as the fourth-strongest team out of the six consensus division favorites. Detroit is the team best-positioned to challenge them; I used the phrase “dead cat bounce” last year and it remains appropriate. The less said about Kansas City the better, but as much fun as it was to watch the magic dissipate last season, the death throes of this infuriating team could be even better. The Twins have famously gone from worst to first before in franchise history; given the weakness of the division and some young players who may be much better than they’ve shown so far, it’s not that far-fetched, but it’s also more likely that they lose 95 again. The White Sox’ rebuild might succeed in helping them compete down the road and in finally ridding the world of the disease that is Hawk Harrelson.

AL WEST

1. Houston

2. Seattle (wildcard)

3. Los Angeles

4. Texas

5. Oakland

Houston looks really good to me; if their rotation holds together (or if they patch any holes with the long-awaited Jose Quintana acquisition), I see them as an elite team. Maybe the third time is the charm picking Seattle to win the wildcard. Truth be told, I find it hard to distinguish between most AL teams, including the middle three in this division. Picking the Angels ahead of the Rangers is more a way to go on record disbelieving that the latter can do it again than an endorsement of the former, but even with a shaky rotation the Angels should be respectable. My Texas pick will probably look terrible when Nomar Mazara breaks out, Yu Darvish returns healthy, and Josh Hamilton rises from the dead or something. Oakland’s outlook for this year looks bleak, but am I crazy to have read their chapter in __Baseball Prospectus__ and thought there were a number of really interesting prospects who could have a sneaky contender season in 2018? Probably.

NL EAST

1. Washington

2. New York (wildcard)

3. Miami

4. Atlanta

5. Philadelphia

It’s very tempting to pick New York over Washington, based on superficial factors like the Nationals’ sad-Giants even-year pattern and their cashing in most of their trade chits for Adam Eaton, but there remains a significant on-paper gap between the two, especially since the Mets stood pat from a major league roster perspective. This might be the best division race out there in a season in which there are six fairly obvious favorites. Sadly, Miami is about one 5 WAR player away from being right in the mix…I wonder where one might have found such a player? Atlanta seems like a better bet than Philadelphia in both the present and future tense, but having a great deal of confidence in the ordering of the two seems foolhardy.

NL CENTRAL

1. Chicago

2. Pittsburgh

3. St. Louis

4. Milwaukee

5. Cincinnati

The Cubs’ starting pitching depth is a little shaky? Kyle Schwarber doesn’t have a position and people might be a little too enthusiastic about him? Hector Rondon struggled late in the season and Wade Davis’ health is not a sure thing? These are the straws that one must grasp at to figure out how Chicago might be defeated. You also have to figure out whether Pittsburgh can get enough production from its non-outfielders while also having some good fortune with their pitching. Or whether St. Louis’ offense is good enough. Or whether Milwaukee or Cincinnati might have a time machine that could jump their rebuild forward a few years. You know, the normal questions you ask about a division.

NL WEST

1. Los Angeles

2. San Francisco

3. Arizona

4. Colorado

5. San Diego

Last year I picked the Giants over the Dodgers despite the numbers suggesting otherwise because of injury concerns. I won’t make that mistake again, as it looks as if LA could once again juggle their rotation and use their resources to patch over any holes. The Giants are strong themselves, but while the two appear close in run prevention, the Dodgers have the edge offensively. The Diamondbacks should have a bounce back season, but one that would still probably break Tony LaRussa’s heart if he still cared. The Rockies seem like they should project better than they do, with more promise on the mound than they usually have. The Padres are the consensus worst team in baseball from all of the projection systems, which can be summed up with two words: Jered Weaver.

WORLD SERIES

Los Angeles over Houston

Just about every projection system out there has the Dodgers ever so slightly ahead of the Cubs. That of course does not mean they are all right--perhaps there is some blind spot about these teams that player projection systems and/or collation of said projections into team win estimates share in common. On the other hand, none of these systems *dislikes* the Cubs—everyone projects them to win a lot of games. I was leaning towards picking LA even before I saw that it was bordering on a consensus, because the two teams look fairly even to me but the Dodgers have more depth on hand, particularly in the starting pitching department (the natural rebuttal is that the Dodgers are likely to need that depth, while the Cubs have four pretty reliable starters). The Dodgers bullpen looks better, and their offense is nothing to sneeze at.

AL Rookie of the Year: LF Andrew Benintendi, BOS

AL Cy Young: Chris Sale, BOS

AL MVP: CF George Springer, HOU

NL Rookie of the Year: SS Dansby Swanson, ATL

NL Cy Young: Stephen Strasburg, WAS

NL MVP: 1B Anthony Rizzo, CHN

## Tuesday, March 14, 2017

### Win Value of Pitcher Adjusted Run Averages

The most common class of metrics used in sabermetrics for cross-era comparisons uses relative measures of actual or estimated runs per out or other similar denominators. These include ERA+ for pitchers and OPS+ or wRC+ for batters (OPS+ being an estimate of relative runs per out, wRC+ using plate appearances in the denominator but accounting for the impact of avoiding outs). While these metrics provide an estimate of runs relative to the league average, they implicitly assume that the resulting relative scoring level is equally valuable across all run environments.

This is in fact not the case, as it is well-established that the relationship between run ratio and winning percentage depends on the overall level of run scoring. A team with a run ratio of 1.25 will have a different expected winning percentage if they play in a 9 RPG environment than if they play in a 10 RPG environment. Metrics like ERA+ and OPS+ do not translate relative runs into relative wins, but presumably the users of such metrics are ultimately interested in what they tell us about player contribution to wins.
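To make that dependence concrete, here is a quick sketch using the Pythagenpat form that this post applies later (with the customary .29 exponent); the 1.25 run ratio is the example from the text, and the specific W% figures are just illustrative:

```python
# Pythagenpat: the win value of a given run ratio depends on the run
# environment (RPG = total runs per game by both teams).
def pythagenpat_wpct(run_ratio, rpg, z=0.29):
    x = rpg ** z           # Pythagenpat exponent for this environment
    wr = run_ratio ** x    # win ratio
    return wr / (wr + 1)   # expected W%

low = pythagenpat_wpct(1.25, 9.0)    # ~.604 in a 9 RPG environment
high = pythagenpat_wpct(1.25, 10.0)  # ~.607 in a 10 RPG environment
```

The gap is small, which previews the next paragraph's point: the win-value difference across realistic scoring levels is real but modest.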

There are two key points that should be acknowledged upfront. One is that the difference in win value based on scoring level is usually quite small. If it wasn’t, winning percentage estimators that don’t take scoring level into account would not be able to accurately estimate W% across the spectrum of major league teams. While methods that do consider scoring level are more accurate estimators of W% than similar methods that don’t, a method like fixed exponent Pythagorean can still produce useful estimates despite maintaining a fixed relationship between runs and wins.

The second is that players are not teams. The natural temptation (and one I will knowingly succumb to in what follows) is to simply plug the player’s run ratio into the formula and convert to a W%. This approach ignores the fact that an individual player’s run rate does not lead directly to wins, as the performance of his teammates must be included as well. Pitchers are close, because while they are in the game they are the team (more accurately, their runs allowed figures reflect the totality of the defense, which includes contributions from the fielders), but even ignoring fielding, non-complete games include innings pitched by teammates as well.

For the moment I will set that aside and instead pretend (in the tradition of Bill James’ Offensive Winning %) that a player or pitcher’s run ratio can or should be converted directly to wins, without weighting the rest of the team. This makes the figures that follow something of a freak show stat, but the approach could be applied directly to team run ratios as well. Individuals are generally more interesting and obviously more extreme, which means that the impact of considering run environment will be overstated.

I will focus on pitchers and will use Bob Gibson’s 1968 season as an example. Gibson allowed 49 runs in 304.2 innings, which works out to a run average of 1.45 (there will be some rounding discrepancies in the figures). In 1968 the NL average RA was 3.42, so Gibson’s adjusted RA (aRA for the sake of this post) is RA/LgRA = .423 (ideally you would park-adjust as well, but I am ignoring park factors for this post). As an aside, please resist the temptation to cite his RA+ of 236 instead. Please.

.423 is a run ratio; Gibson allowed runs at 42.3% of the league average. Since wins are the ultimate unit of measurement, it is tempting to convert this run ratio to a win ratio. We could simply square it, which reflects a Pythagorean relationship. Ideally, though, we should consider the run environment. The 1968 NL was an extremely low scoring league. Pythagenpat suggests that the ideal exponent is around 1.746. Let’s define the Pythagenpat exponent to use as:

x = (2*LgRA)^.29

Note that this simply uses the league scoring level to convert to wins; it does not take into account Gibson’s own performance. That would be an additional enhancement, but it would also strongly increase the distortion that comes from viewing a player as his own team, albeit less so for pitchers and especially those who basically were pitching nine innings/start as in the case of Gibson.

So we could calculate a loss ratio as aRA^x, which is .223 for Gibson. This means that a team with Gibson’s aRA in this environment would be expected to have .223 losses for every win (basic ratio transformations apply: the reciprocal would be the win ratio, the loss ratio divided by (1 + itself) would be a losing %, the complement of that the W%, etc.)
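In code form, a sketch of the calculation so far (using the rounded inputs from the text, so figures may be off by a thousandth or so):

```python
# Gibson 1968: convert his adjusted RA to a loss ratio via Pythagenpat
lg_ra = 3.42                       # 1968 NL runs allowed per 9 innings
ara = 1.45 / lg_ra                 # adjusted RA, ~.424
x = (2 * lg_ra) ** 0.29            # Pythagenpat exponent, ~1.746
loss_ratio = ara ** x              # ~.223 losses per win
equiv_wpct = 1 / (1 + loss_ratio)  # equivalent team W%, ~.817
```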

At this point, many people would like to convert it to a W% and stop there, but I’d like to preserve the scale of a run average while reflecting the win impact. In order to do so, I need to select a Pythagorean exponent corresponding to a reference run environment to convert Gibson’s loss ratio back to an equivalent aRA for that run environment. For 1901-2015, the major league average RA was 4.427, which I’ll use as the reference environment; it corresponds to a 1.882 Pythagenpat exponent (there are actually 8.94 IP/G over this span, so the actual RPG is 8.937, which would be a 1.887 exponent--I'll stick with RA rather than RPG for this example since we are already using it to calculate aRA).

If we call that 1.882 exponent r, then the loss ratio can be converted back to an equivalent aRA by raising it to the (1/r) power. Of course, the loss ratio is just an interim step, and this is equivalent to:

aRA^(x*(1/r)) = aRA^(x/r) = waRA

waRA (excuse the acronyms, which I don’t intend to survive beyond this post) is win-Adjusted Run Average. For Gibson, it works out to .450, which illustrates how small the impact is. Pitching in one of the most extreme run environments in history, Gibson’s aRA is only 6.4% higher after adjusting for win impact.

In 1994, Greg Maddux allowed 44 runs in 202 innings for a run average of 1.96. Pitching in a league with a RA of 4.65, his aRA was .421, basically equal to Gibson. But his waRA was better, at .416, since the same run ratio leads to more wins in a higher scoring environment.
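The whole conversion can be wrapped in a small function (a sketch; minor rounding differences from the figures above are possible):

```python
def waRA(ra, lg_ra, ref_ra=4.427, z=0.29):
    """win-Adjusted Run Average: convert an adjusted RA into its
    win-value equivalent in the reference run environment."""
    ara = ra / lg_ra
    x = (2 * lg_ra) ** z   # exponent for the pitcher's league
    r = (2 * ref_ra) ** z  # exponent for the 1901-2015 reference
    return ara ** (x / r)

gibson = waRA(49 * 9 / (304 + 2 / 3), 3.42)  # 1968 NL -> ~.450
maddux = waRA(44 * 9 / 202, 4.65)            # 1994 NL -> ~.416
```

Despite essentially equal aRAs, Maddux comes out ahead once the run environments are accounted for, matching the discussion above.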

It is my guess that consumers of sabermetrics will generally find this result unsatisfactory. There seems to be a commonly-held belief that it is easier to achieve a high ERA+ in a higher run scoring environment, but the result of this approach is the opposite--as RPG increases, the win impact of the same aRA increases as well. Of course, this approach says nothing about how “easy” it is to achieve a given aRA--it converts aRA to a win-value equivalent aRA in a reference run environment. It is possible that it could simultaneously be “easier” to achieve a low aRA in a higher scoring environment and for the value of a low aRA to be enhanced in a higher scoring environment. I am making no claim regarding the impressiveness or aesthetic value, etc. of any pitcher’s performance, only attempting to frame it in terms of win value.

Of course, the comparison between Gibson and Maddux need not stop there. I do believe that waRA shows us that Maddux’ rate of allowing runs was more valuable in context than Gibson’s, but there is more to value than the rate of allowing runs. Of course we could calculate a baselined metric like WAR to value the two seasons, but even if we limit ourselves to looking at rates, there is an additional consideration that can be added.

So far, I’ve simply used the league average to represent the run environment, but a pitcher has a large impact on the run environment through his own performance. If we want to take this into account, it would be inappropriate to simply use LgRA + pitcher’s RA as the new RPG to plug into Pythagenpat; we definitely need to consider the extent to which the pitcher’s teammates influence the run environment, since ultimately Gibson’s performance was converted into wins in the context of games played by the Cardinals, not a hypothetical all-Gibson team. So I will calculate a new RPG instead by assuming that the 18 innings in a game (to be more precise for a given context, two times the league average IP/G) are filled in by the pitcher’s RA for his IP/G, and the league’s RA for the remainder.

In the 1968 NL, the average IP/G was 9.03 and Gibson’s 304.2 IP were over 34 appearances (8.96 IP/G), so the new RPG is 8.96*1.45/9 + (2*9.03 - 8.96)*3.42/9 = 4.90 (rather than 6.84 previously). This converts to a Pythagenpat exponent of 1.59, and a pwaRA (personal win-Adjusted Run Average?) of .485. To spell that all out in a formula:

px = ((IP/G)*RA/9 + (2*Lg(IP/G) - IP/G)*LgRA/9) ^ .29

pwaRA = aRA^(px/r)
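A sketch of those two formulas (using the rounded inputs from the text, so the result may land a couple thousandths off the .485 quoted above):

```python
def pwaRA(ra, ip_g, lg_ra, lg_ip_g=9.0, ref_ra=4.427, z=0.29):
    """waRA with the run environment adjusted for the pitcher's own
    effect on scoring over his share of the innings."""
    rpg = ip_g * ra / 9 + (2 * lg_ip_g - ip_g) * lg_ra / 9
    px = rpg ** z          # personal Pythagenpat exponent
    r = (2 * ref_ra) ** z  # reference-environment exponent
    return (ra / lg_ra) ** (px / r)

# Gibson 1968: 8.96 IP/G in a 9.03 IP/G league -> ~.485-.487
gibson = pwaRA(1.45, 8.96, 3.42, lg_ip_g=9.03)
```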

Note that adjusting for the pitcher’s impact on the scoring context reduces the win impact of effective pitchers, because as discussed earlier, lowering the RPG lowers the Pythagenpat exponent and makes the same run ratio convert to fewer wins. In fact, considering the pitcher’s effect on the run environment in which he operates actually brings most starting pitchers’ pwaRA closer to league average than their aRA is.

pwaRA is divorced from any real sort of baseball meaning, though, because pitchers aren’t by themselves a team. Suppose we calculated pwaRA for two teammates in a 4.5 RA league. The starter pitches 6 innings and allows 2 runs; the reliever pitches 3 innings and allows 1. Both pitchers have a RA of 3.00, and thus identical aRA (.667) or waRA (.665). Furthermore, their team also has a RA of 3.00 for this game, and whether figured as a whole or as the weighted average of the two individuals, the team also has the same aRA and waRA.

However, if we calculate the starter’s pwaRA, we get .675, while the reliever is at .667. Meanwhile, the team has a pwaRA of .679, which makes this all seem quite counterintuitive. But while all three entities have the same RA, the more innings that RA covers, the lower the resulting run environment, and the less win value the same run ratio carries on a per-inning basis.
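The teammate example can be sketched as follows (the third-decimal figures depend on rounding choices along the way, so they may differ slightly from the post's):

```python
def pwaRA(ra, ip, lg_ra, lg_ip_g=9.0, ref_ra=4.427, z=0.29):
    # run environment with the pitcher's own innings filled in at his RA
    rpg = ip * ra / 9 + (2 * lg_ip_g - ip) * lg_ra / 9
    return (ra / lg_ra) ** (rpg ** z / (2 * ref_ra) ** z)

starter = pwaRA(3.0, 6.0, 4.5)   # 6 IP, 2 R -> ~.675
reliever = pwaRA(3.0, 3.0, 4.5)  # 3 IP, 1 R -> ~.670
team = pwaRA(3.0, 9.0, 4.5)      # 9 IP, 3 R -> ~.679
# same RA, but more innings -> lower run environment -> less win value
```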

I hope this post serves as a demonstration of the difficulty of divorcing a pitcher’s value from the number of innings he pitched. Of course, the effects discussed here are very small, much smaller than the impact of other related differences, like the inherent statistical advantage of pitchers over shorter stints, attempts to model differences in replacement level between starters and relievers, and attempts to detect/value any beneficial side effects of starters working deep into games.

One of my long-standing interests has been the proper rate stat to use to express a batter’s run contribution (I have been promising myself for almost as long as this blog has been in existence that I will write a series of posts explaining the various options for such a metric and the rationale for each, yet have failed to do so). I’ve never had the same pull to the question for pitchers, in part because the building block seems obvious: runs/out (which depending on how one defines terms can manifest itself as RA, ERA, component ERA, FIP-type metrics, etc.)

But while there are a few adjustments that can theoretically be made between a hitter’s overall performance expressed as a rate and a final value metric (like WAR), the adjustments (such as the hitter’s impact on his team’s run scoring beyond what the metric captures itself, and the secondary effect that follows on the run/win conversion) are quite minor in scale compared to similar adjustments for pitchers. While the pitcher (along with his fielders) can be thought of as embodying the entire team while he is in the game, that also means that said unit’s impact on the run/win conversion is significant. And while there are certainly cases of batters whose rates may be deceiving because of how they are deployed by their managers (particularly platooning), the additional playing time over which a rate is spread increases value in a WAR-like metric without any special adjustment. Pitchers’ roles and secondary effects thereof (like any potential value generated by “eating” innings) have a more significant (and more difficult to model) impact on value than the comparable effects for position players.

## Monday, February 13, 2017

### Rebuilding a Strip Mall

"Rebuilding", as commonly thrown around in sports discussions, is an interesting term. It inherently implies that something had been built on the same spot previously. It does not, however, give an indication whether what was built there was a blanket fort or the Taj Mahal, a strip mall or the Sears Tower. If one rebuilds on the site of a strip mall, does "re-" imply they are building another strip mall, or might they be building something else?

The baseball program that Greg Beals has presided over for six seasons at The Ohio State University has been much more of a strip mall than a Sears Tower. After his most successful season, which saw OSU tie for third in the Big Ten regular season, win the Big Ten Tournament, and qualify for their first NCAA regional since 2009, Beals is now faced with a rebuilding project in the classic sports sense. Of the nine players with the most PA in 2016, OSU must replace seven, so it would be fair to say that there will be seven new regulars. OSU must also replace two of its three weekend starters; the bullpen is the only area of the roster not decimated by graduation and the draft.

Note: The discussion of potential player roles that follows is my own opinion, informed by my own knowledge of the players and close watching of the program and information released by the SID, particularly the season preview posted here.

Sophomore Jacob Barnwell will almost certainly be the primary catcher; he played sparingly last season (just 29 PA). This is one of the few open positions not due to loss, but rather to a position switch which will be discussed in a moment. Classmate Andrew Fishel (8 PA) will serve as his backup.

First base/DH will be shared by senior Zach Ratcliff, who has flashed power at times during his career but has never earned consistent playing time, and Boo Coolen, a junior Hawaii native who played at Cypress CC in California. Junior Noah McGowan, a transfer from McLennan CC in Texas, would appear to have the inside track at the keystone; his JUCO numbers are impressive but come with obvious caveats. Sophomore Brady Cherry, who got off to a torrid start in 2016 but then cooled precipitously (final line .218/.307/.411 in 143 PA), is likely to play third and bat in the middle of the order. At shortstop, senior captain Jalen Washington moves out from behind the plate to captain the infield; he spent his first two years as a Buckeye as a utility infielder, so it was the move to catcher, not the move to shortstop, that really stands out. Unfortunately, Washington didn’t offer much with the bat as a junior (.249/.331/.343 in 261 PA). Other infield contenders include true freshman shortstop Noah West, redshirt freshman middle infielder Casey Demko, true freshman Conor Pohl at the corners, and redshirt sophomore Nate Romans and redshirt freshman Matt Carpenter in utility roles.

The one thing that appears clear in the outfield is that junior Tre’ Gantt will take over as center fielder; he struggled offensively last season (.255/.311/.314 in 158 PA). True freshman Dominic Canzone may step in right away in right field, while left field/DH might be split between a pair of transfers. Tyler Cowles, a junior Columbus native who hit well at Sinclair CC in Georgia, will attempt to join Coolen and satisfy Beals’ desperate need for bats with experience and power. Other outfielders include senior former pitcher Shea Murray and little-used redshirt sophomore Ridge Winand.

The pitching staff is slightly more intact, but not much so. Redshirt junior captain Adam Niemeyer will likely be the #1 starter as the only returning weekend starter; his 2016 campaign can be fairly described as average. Sophomore Ryan Feltner was the #4 starter last year and so is a safe bet to pitch on the weekend; his 5.67 eRA was not encouraging but 8 K/3.9 W suggest some raw, harness-able ability. The third spot will apparently go to an erstwhile reliever. Junior Yianni Pavlopoulos was a surprising choice as closer last year, but pitched very well (10.3 K/3.3 W, 3.72 eRA), while senior Jake Post returns from a season wiped out by injury. Neither pitcher has been the picture of health throughout their careers, but Pavlopoulos seems the more likely choice to start. Junior Austin Woodby (7.75 eRA in 39 innings) and sophomore lefty Connor Curlis (six relief innings) will jockey for weekday assignments along with junior JUCO transfer Reece Calvert (a teammate of McGowan) and three true freshmen: lefty Michael McDonough and righties Collin Lollar and Gavin Lyon.

The bullpen will be well-stocked, even assuming Pavlopoulos takes a spot in the rotation. Junior sidearmer Seth Kinker was a workhorse (team-high 38 appearances) and, behind departed ace Tanner Tully, was arguably Ohio’s most valuable pitcher in 2016. If Post doesn’t win a rotation spot, he’ll look to reclaim a setup role after his lost season, and junior sidearmer Kyle Michalik pitched well in middle relief last season. They form the core of a formidable bullpen that will almost certainly be augmented by a lefty specialist, a favorite of Beals. He’ll choose from senior Joe Stoll (twelve unsuccessful appearances), true freshman Andrew Magno, and Curlis, who is the favorite in my book should he not beat out Woodby for a starting spot. Sophomore JUCO transfer Thomas Waning (also a sidearmer; one of the few positives about Beals as a coach is his affinity for sidearmers) appears likely to factor in as well. Other right-handed options for the pen will include junior Dustin Jourdan (a third JUCO transfer from McLennan), sophomore Kent Axcell (making the jump from the club team), and true freshman Jake Vance.

The non-conference schedule is again rather unambitious. The season opens the weekend of February 17 in central Florida with neutral site games against Kansas State (two), Delaware, and Pitt. Two games each against Utah and Oregon State in Arizona will follow as part of the Big Ten/Pac 12 challenge. The Bucks will then play true road series in successive weekends against Campbell and Florida Gulf Coast, then play midweek neutral site games in Port Charlotte, FL against Lehigh and Bucknell. The home schedule opens March 17 with a weekend series against Xavier (the Sunday finale being played at XU), and the next two weekends see the Buckeyes open Big Ten play by hosting Minnesota and Purdue.

Subsequent weekend series are at Penn State, at Michigan State, home against UNC-Greensboro, home against Nebraska, at the forces of evil, at Iowa, and home against Indiana. Midweek opponents are Youngstown State, OU, Kent State, Cincinnati, Eastern Michigan, Northern Kentucky, Texas Tech (two), Bowling Green, Ball State, and Toledo, all at home, giving OSU 28 scheduled home dates.

Should OSU finish in the top eight in the Big Ten, the Big Ten Tournament is shifting from the recent minor league/MLB/CWS venues (including Huntington Park in Columbus, Target Field, and TD Ameritrade Park in Omaha) to campus sites, although scheduled in advance instead of at the home park of the regular season champ as was the case for many years in the past. This year’s tournament will be in Bloomington, and it speaks to both the volume of players lost and Beals’ uninspiring record that participation in this event should not be taken for granted.

## Thursday, February 09, 2017

### Simple Extra Inning Game Length Probabilities

With the recent news that MLB will be testing starting an extra inning with a runner on second in the low minors, it might be worthwhile to crunch some numbers and estimate the impact on the average length of extra inning games under various base/out situations to start innings. I used empirical data on the probability of scoring X runs in an inning given the base/out situation, based on a nifty calculator created by Greg Stoll. Stoll’s description says it is based on MLB games from 1957-2015, including postseason.

Obviously using empirical data doesn’t allow you to vary the run environment…the expected runs for the rest of the inning with no outs, bases empty is .466 so the average R/G here is around 4.2. It also doesn’t account for any behavioral changes due to game situation, as strategy can obviously differ when it is an extra innings situation as opposed to a more mundane point in the game. Plus any quirks in the data are not smoothed over. Still, I think it is a fun exercise to quickly estimate the outcome of various extra inning setups.

These results will be presented in terms of average number of extra innings and probability of Y extra innings assuming that the rule takes effect in the tenth inning (i.e. each extra inning is played under the same rules).

If you know the probability of scoring X runs, assume the two teams are of equal quality, and assume independence between their runs scored (all significant assumptions), then it is very simple to calculate the probabilities of various outcomes in extra innings. If Pa(x) is the probability that team A scores x runs in an inning, and Pb(x) is the probability that team B scores x runs in an inning, then the probability that team A outscores team B in the inning (i.e. wins the game this inning) is:

P(A > B) = Pa(1)*Pb(0) + Pa(2)*[Pb(0) + Pb(1)] + Pa(3)*[Pb(0) + Pb(1) + Pb(2)] + ….

Since we’ve assumed the teams are of equal quality, the probability for team B is the same, just switching the Pas and Pbs. We can calculate the probability of them scoring the same number of runs (i.e. the probability the game extends an additional inning) by taking 1 – P(A > B) – P(B > A) = 1 – 2*P(A > B) since the teams are even, or directly as:

P(A = B) = Pa(0)*Pb(0) + Pa(1)*Pb(1) + Pa(2)*Pb(2) + … = Pa(0)^2 + Pa(1)^2 + Pa(2)^2 + … since the teams are even

I called this P. The probability that game continues past the tenth is equal to P. The probability that the game terminates after the tenth is 1-P. The probability that the game continues past the eleventh is P^2; the probability that the game terminates after the eleventh is P*(1 – P). Continue recursively from here. The average length of the game is 10*P(terminates after 10) + 11*P(terminates after 11) + …

I used Stoll’s data to estimate a few probabilities of game length for a rule that would start each extra inning with the teams in each of the 24 base/out situations. For a given inning-initial base/out situation, P(10) is the probability that the game is over after 10 innings, P(11) the probability it is over after 11 or fewer innings, etc. “Average” is the average number of innings in an extra inning game played under that rule, and R/I is the average runs scored in the remainder of the inning from Stoll’s data for teams in that base/out situation.

It will come as no surprise that generally the higher the R/I, the lower the probability of the game continuing is. In a low scoring environment, the teams are more likely to each score zero or one run; as the scoring environment increases, so does the variance (I should have calculated the variance of runs per inning from Stoll’s data to really drive this point home, but I didn’t think of it until after I’d made the tables), and differences in inning run totals between the two teams are what ends extra inning games.

The highlighted rows are bases empty, nobody out (i.e. the status quo); runner at second, nobody out (the proposed MLB rule); runners at first and second, nobody out (the international rule, starting from the eleventh inning; this chart assumes all innings starting with the tenth are played under the same rules, so it doesn’t let you compare these two rules directly); and bases loaded, nobody out, which maximizes the run environment and minimizes the duration of extra innings (making games beyond 12 innings as theoretically rare as games beyond 15 innings are under traditional rules). Of course, these higher scoring innings would take longer to play, so simply looking at the duration of games doesn’t fully address the alleged problems that tinkering with the rules would be intended to solve.

I did separately calculate these probabilities for the international rule--play the tenth inning under standard rules, then start subsequent innings with runners on first and second. It produces longer games than starting with a runner at second in the tenth, which is not surprising.

## Monday, January 30, 2017

### Run Distribution and W%, 2016

Every year I state that by the time this post rolls around next year, I hope to have a fully functional Enby distribution to allow the metrics herein to be more flexible (e.g. not based solely on empirical data, able to handle park effects, etc.) And every year during the year I fail to do so. “Wait ‘til next year”...the Indians taking over the longest World Series title drought in spectacular fashion has now given me an excuse to apply this to any baseball-related shortcoming on my part. This time, it really should be next year; what kept me from finishing up over the last twelve months was only partly distraction and largely perfectionism over a minor portion of the Enby methodology that I have now convinced myself is folly.

Anyway, there are some elements of Enby in this post, as I’ve written enough about the model to feel comfortable using bits and pieces. But I’d like to overhaul the calculation of gOW% and gDW% that are used at the end based on Enby, and I’m not ready to do that just yet given the deficiency of the material I’ve published on Enby.

Self-indulgence, aggrandizement, and deprecation aside, I need to caveat that this post in no way accounts for park effects. But that won’t come into play as I first look at team record in blowouts and non-blowouts, with a blowout defined as 5+ runs. Obviously some five-run games are not truly blowouts, and some closer games are; one could probably use WPA to make a better definition of blowout based on some sort of average win probability, or the win probability at a given moment or moments in the game. I should also note that Baseball-Reference uses this same definition of blowout. I am not sure when they started publishing it; they may well have pre-dated my usage of five runs as the delineator. However, I did not adopt that as my standard because of Baseball-Reference; I adopted it because it made the most sense to me, being unaware of any B-R standard.

73.0% of major league games in 2016 were non-blowouts (and of course 27.0% were blowouts). The leading records in non-blowouts:

Texas was much the best in close-ish games; their extraordinary record in one-run games, which of course are a subset of non-blowouts, was well documented. The Blue Jays have made it to consecutive ALCS, but their non-blowout regular season record in 2015-16 is just 116-115. Also, if you audit this you may note that the total comes to 1771-1773, which is obviously wrong. I used Baseball Prospectus’ data.

Records in blowouts:

It should be no surprise that the Cubs were the best in blowouts. Toronto was nearly as good last year, 37-12, for a two-year blowout record of 66-27 (.710).

The largest differences (blowout - non-blowout W%) and percentage of blowouts and non-blowouts for each team:

It is rare to see a playoff team with such a large negative differential as Texas had. Colorado played the highest percentage of blowouts and San Diego the lowest, which shouldn’t come as a surprise given that scoring environment has a large influence. Outside of Colorado, though, the Cubs and the Indians played the highest percentage of blowout games, with the latter not sporting as high of a W% but having the second most blowout wins.

A more interesting way to consider game-level results is to look at how teams perform when scoring or allowing a given number of runs. For the majors as a whole, here are the counts of games in which teams scored X runs:

The “marg” column shows the marginal W% for each additional run scored. In 2016, the third run was the run with the greatest marginal impact on the chance of winning, while it took a fifth run to make a team more likely to win than lose. It was also the first time since 2008 that teams scoring four runs had a losing record, a product of the resurgence in run scoring levels.
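
The "marg" column is simple to derive: it is just the difference between adjacent empirical W%s by runs scored. In the sketch below, only the .089 (one run) and .228 (two runs) values come from this post; the rest are illustrative placeholders, not the actual 2016 data.

```python
# Sketch of the "marg" column: the marginal W% of each additional run is
# the difference between adjacent empirical W%s. Values for 3+ runs are
# illustrative placeholders, not the actual 2016 data.
wpct_by_runs = {0: 0.000, 1: 0.089, 2: 0.228, 3: 0.368,
                4: 0.479, 5: 0.588, 6: 0.688, 7: 0.768}

marg = {k: wpct_by_runs[k] - wpct_by_runs[k - 1]
        for k in wpct_by_runs if k - 1 in wpct_by_runs}

best_run = max(marg, key=marg.get)          # run with the biggest marginal impact
first_winning = min(k for k, w in wpct_by_runs.items() if w > 0.5)
```

With these placeholder values, `best_run` is the third run and `first_winning` is the fifth, matching the 2016 pattern described above.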

I use these figures to calculate a measure I call game Offensive W% (or Defensive W% as the case may be), which was suggested by Bill James in an old Abstract. It is a crude way to use each team’s actual runs per game distribution to estimate what their W% should have been by using the overall empirical W% by runs scored for the majors in the particular season.

The theoretical distribution from Enby discussed earlier would be much preferable to the empirical distribution for this exercise, but I’ve defaulted to the 2016 empirical data. Some of the drawbacks of this approach are:

1. The empirical distribution is subject to sample size fluctuations. In 2016, all 58 times that a team scored twelve runs in a game, they won; meanwhile, teams that scored thirteen runs were 46-1. Does that mean that scoring twelve runs is preferable to scoring thirteen? Of course not--it's a quirk in the data. Additionally, the marginal values don’t necessarily make sense even when W% increases from one runs-scored level to the next. (In figuring the gEW% family of measures below, I lumped games with 12+ runs together, which smooths any illogical jumps in the win function, but leaves the inconsistent marginal values unaddressed and fails to make any differentiation between scoring in that range. The values actually used are displayed in the “use” column, and the “invuse” column is the complement of these figures--i.e. those used to credit wins to the defense.)

2. Using the empirical distribution forces one to use integer values for runs scored per game. Obviously the number of runs a team scores in a game is restricted to integer values, but not allowing theoretical fractional runs makes it very difficult to apply any sort of park adjustment to the team frequency of runs scored.

3. Related to #2 (really its root cause, although the park issue is important enough from the standpoint of using the results to evaluate teams that I wanted to single it out), when using the empirical data there is always a tradeoff that must be made between increasing the sample size and losing context. One could use multiple years of data to generate a smoother curve of marginal win probabilities, but in doing so one would lose centering at the season’s actual run scoring rate. On the other hand, one could split the data into AL and NL and more closely match context, but you would lose sample size and introduce more quirks into the data.

I keep promising that I will use Enby to replace the empirical approach, but for now I will use Enby for a couple graphs but nothing more.

First, a comparison of the actual distribution of runs per game in the majors to that predicted by the Enby distribution for the 2016 major league average of 4.479 runs per game (Enby distribution parameters are B = 1.1052, r = 4.082, z = .0545):

This is pretty typical of the kind of fit you will see from Enby for a given season: a few points where there’s a noticeable difference (in this case the even tallies two, four, and six on the high side and one and seven on the low side), but generally acquitting itself as a decent model of the run distribution.
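
For those curious, the Enby frequencies compared above can be generated approximately as follows. My understanding is that Enby is built on a zero-modified negative binomial: a negative binomial with shape r and scale B supplies the shape of the distribution, the zero point is set to z, and the remaining probabilities are rescaled. The sketch below assumes that construction and may differ in detail from the exact Enby procedure.

```python
from math import lgamma, log, exp

def enby_pmf(k, r, B, z):
    """Zero-modified negative binomial sketch: P(0) = z; for k >= 1 the
    negative binomial probabilities are rescaled so the total is still 1."""
    def nb(j):  # negative binomial pmf with shape r, scale B (mean = r*B)
        log_p = (lgamma(j + r) - lgamma(r) - lgamma(j + 1)
                 - r * log(1 + B) + j * log(B / (1 + B)))
        return exp(log_p)
    if k == 0:
        return z
    return nb(k) * (1 - z) / (1 - nb(0))

# 2016 MLB parameters from the post: B = 1.1052, r = 4.082, z = .0545
probs = [enby_pmf(k, 4.082, 1.1052, 0.0545) for k in range(40)]
mean = sum(k * p for k, p in enumerate(probs))  # lands very near 4.479 R/G
```

That the mean of the assumed construction comes out within about .001 of the stated 4.479 R/G is at least suggestive that the sketch is close to the real thing.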

I will not go into the full details of how gOW%, gDW%, and gEW% (which combines both into one measure of team quality) are calculated in this post, but full details were provided here and the paragraph below gives a quick explanation. The “use” column here is the coefficient applied to each game to calculate gOW% while the “invuse” is the coefficient used for gDW%. For comparison, I have looked at OW%, DW%, and EW% (Pythagenpat record) for each team; none of these have been adjusted for park to maintain consistency with the g-family of measures which are not park-adjusted.

A team’s gOW% is the sumproduct of their frequency of scoring x runs, where x runs from 0 to 22, and the empirical W% of teams in 2016 when they scored x runs. For example, Philadelphia was shut out 11 times; they would not be expected to win any of those games (nor did they, we can be certain). They scored one run 23 times; an average team in 2016 had a .089 W% when scoring one run, so they could have been expected to win 2.04 of the 23 games given average defense. They scored two runs 22 times; an average team had a .228 W% when scoring two, so they could have been expected to win 5.02 of those games given average defense. Sum up the estimated wins for each value of x and divide by the team’s total number of games and you have gOW%.

It is thus an estimate of what W% a team with the given team’s empirical distribution of runs scored and a league average defense would have. It is analogous to James’ original construct of OW% except looking at the empirical distribution of runs scored rather than the average runs scored per game. (To avoid any confusion, James in 1986 also proposed constructing an OW% in the manner in which I calculate gOW%).
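
The gOW% calculation can be sketched in code using the partial Philadelphia figures cited above. Only the 0-2 run rows are filled in here; a real calculation runs x through the whole distribution (with 12+ lumped together, as noted earlier).

```python
def g_ow_pct(games_by_runs, league_wpct, total_games):
    """Sumproduct of a team's runs-scored frequencies and the league's
    empirical W% at each runs-scored level, divided by games played.
    league_wpct must cover every runs-scored level in games_by_runs."""
    exp_wins = sum(count * league_wpct[runs]
                   for runs, count in games_by_runs.items())
    return exp_wins / total_games

# Partial 2016 Phillies example from the text: 11 shutouts, 23 one-run
# games (.089 league W%), 22 two-run games (.228 league W%).
league_wpct = {0: 0.000, 1: 0.089, 2: 0.228}
phi_partial = {0: 11, 1: 23, 2: 22}
partial_wins = sum(n * league_wpct[r] for r, n in phi_partial.items())
# 23 * .089 + 22 * .228 = 2.047 + 5.016 = 7.063 expected wins from these games
```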

For most teams, gOW% and OW% are very similar. Teams whose gOW% is higher than OW% distributed their runs more efficiently (at least to the extent that the methodology captures reality); the reverse is true for teams with gOW% lower than OW%. The teams that had differences of +/- 2 wins between the two metrics were (all of these are the g-type less the regular estimate):

Positive: MIA, PHI, ATL, KC

Negative: LA, SEA

The Marlins offense had the largest difference (3.55) between their corresponding g-type W% and their OW%/DW%, so I like to include a run distribution chart to hopefully make it easier to understand what this means. Miami scored 4.167 R/G, so their Enby parameters (r = 3.923, B = 1.0706, z = .0649) produce these estimated frequencies:

Miami scored 0-3 runs in 47.8% of their games compared to an expected 47.9%. But by scoring 0-2 runs 3% less often than expected and scoring three 3% more often, they had 1.3 more expected wins from such games than Enby expected. They added an additional 1.2 wins from 4-6 runs, and lost 1.1 from 7+ runs. (Note that the total doesn’t add up to the difference between their gOW% and OW%, nor should it--the comparisons I was making were between what the empirical 2016 major league W%s for each x runs scored predicted using their actual run distribution and their Enby run distribution. If I had my act together and was using Enby to estimate the expected W% at each x runs scored, then we would expect a comparison like the preceding to be fairly consistent with a comparison of gOW% to OW%).

Teams with differences of +/- 2 wins between gDW% and standard DW%:

Positive: CIN, COL, ARI

Negative: NYN, MIL, MIA, TB, NYA

The Marlins were the only team to appear on both the offense and defense list, their defense giving back 2.75 wins when looking at their run distribution rather than run average.

Teams with differences of +/- 2 wins between gEW% and standard EW%:

Positive: PHI, TEX, CIN, KC

Negative: LA, SEA, NYN, MIL, NYA, BOS

The Royals finally showed up on these lists, but turning a .475 EW% into a .488 gEW% is not enough pixie dust to make the playoffs.

Below is a full chart with the various actual and estimated W%s:

## Monday, January 23, 2017

### Crude Team Ratings, 2016

For the last several years I have published a set of team ratings that I call "Crude Team Ratings". The name was chosen to reflect the nature of the ratings--they have a number of limitations, of which I documented several when I introduced the methodology.

I explain how CTR is figured in the linked post, but in short:

1) Start with a win ratio figure for each team. It could be actual win ratio, or an estimated win ratio.

2) Figure the average win ratio of the team’s opponents.

3) Adjust for strength of schedule, resulting in a new set of ratings.

4) Begin the process again. Repeat until the ratings stabilize.

The resulting rating, CTR, is an adjusted win/loss ratio rescaled so that the majors’ arithmetic average is 100. The ratings can be used to directly estimate W% against a given opponent (without home field advantage for either side); a team with a CTR of 120 should win 60% of games against a team with a CTR of 80 (120/(120 + 80)).
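
A sketch of steps 1 through 4 and of the head-to-head use of the ratings follows. The three-team round-robin is made up for illustration; the real calculation uses each team's actual opponents weighted by games played.

```python
def crude_team_ratings(win_ratio, opponents, iterations=100):
    """win_ratio: {team: W/L ratio}; opponents: {team: list of opponents faced}.
    Iterates the strength-of-schedule adjustment, rescaling so the
    arithmetic average CTR is 100, until the ratings stabilize."""
    ctr = {t: 100.0 for t in win_ratio}
    for _ in range(iterations):
        sos = {t: sum(ctr[o] for o in opps) / len(opps)   # step 2: opponents' avg
               for t, opps in opponents.items()}
        ctr = {t: win_ratio[t] * sos[t] for t in win_ratio}  # step 3: adjust
        scale = len(ctr) * 100 / sum(ctr.values())           # force average to 100
        ctr = {t: r * scale for t, r in ctr.items()}         # step 4: repeat
    return ctr

# Hypothetical league: A (96-66), B (81-81), C (66-96), round-robin schedule
ratings = crude_team_ratings(
    {'A': 96 / 66, 'B': 1.0, 'C': 66 / 96},
    {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']})

def expected_wpct(ctr_a, ctr_b):
    """Head-to-head estimate: a 120 CTR team beats an 80 CTR team 60% of the time."""
    return ctr_a / (ctr_a + ctr_b)
```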

First, CTR based on actual wins and losses. In the table, “aW%” is the winning percentage equivalent implied by the CTR and “SOS” is the measure of strength of schedule--the average CTR of a team’s opponents. The rank columns provide each team’s rank in CTR and SOS:

Last year, the top ten teams in CTR were the playoff participants. That was not remotely the case this year thanks to a resurgent gap in league strength. While the top five teams in the AL made the playoffs, and the NL was very close (St. Louis slipped just ahead of New York and San Francisco, by a margin of .7 wins if you compare aW%), the Giants ranked only fifteenth in the majors in CTR. Each of the Mariners, Tigers, Yankees, and Astros rated stronger than the Dodgers, the NL’s actual #3 seed and #3 finisher in CTR.

The Dodgers had the second-softest schedule in MLB, ahead of only the Cubs. (The natural tendency is for strong teams in weak divisions to have the lowest SOS, since they don’t play themselves. The flip side is also true--I was quite sure, even before checking, that Tampa Bay had the toughest schedule). The Dodgers’ average opponent was about as good as the Pirates or the Marlins; the Mariners’ average opponent was rated stronger than the Cardinals.

At this point you probably want to see just how big of a gap there was between the AL and NL in average rating. Originally I gave the arithmetic average CTR for each division, but that’s mathematically wrong--you can’t average ratios like that. Then I switched to geometric averages, but really what I should have done all along is just give the arithmetic average aW% for each division/league. aW% converts CTR back to an “equivalent” W-L record, such that the average across the major leagues will be .500. I do this by taking CTR/(100 + CTR) for each team, then applying a small fudge factor to force the average to .500. In order to maintain some basis for comparison to prior years, I’ve provided the geometric average CTR alongside the arithmetic average aW%, and the equivalent CTR obtained by solving for CTR in the equation:

aW% = CTR/(100 + CTR)*F, where F is the fudge factor (it was 1.0012 for 2016 lest you be concerned there is a massive behind-the-scenes adjustment taking place).
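
In code, the conversion and its inverse look like this (F = 1.0012 per the note above; `equivalent_ctr` is the "solving for CTR" step):

```python
def a_wpct(ctr, F=1.0012):
    """Convert a CTR to its equivalent W%: aW% = CTR/(100 + CTR) * F."""
    return ctr / (100 + ctr) * F

def equivalent_ctr(awp, F=1.0012):
    """Solve aW% = CTR/(100 + CTR) * F for CTR: CTR = 100*aW%/(F - aW%)."""
    return 100 * awp / (F - awp)
```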

Every AL division was better than every NL division, a contrast from 2015 in which the two worst divisions were the NL East and West, but the NL Central was the best division. Whether you use the geometric or backdoor-arithmetic average CTRs to calculate it, the average AL team’s expected W% versus an average NL team is .545. The easiest SOS in the AL belonged to the Indians, as is to be expected for the strongest team in the weakest division; it was still one point higher than that of the toughest NL schedule (the Reds, the weakest team in the strongest division).

I also figure CTRs based on various alternate W% estimates. The first is based on game-Expected W%, which you can read about here. It uses each team’s game-by-game distribution of runs scored and allowed, but treats the two as independent:

Next is Expected W%, that is to say Pythagenpat based on actual runs scored and allowed:
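
The Pythagenpat estimate itself can be sketched as below. I've used 0.29 for the RPG exponent, which is one common choice; the exact value in use here may differ slightly.

```python
def pythagenpat_wpct(r_per_g, ra_per_g, z=0.29):
    """Pythagenpat: W% = R^x / (R^x + RA^x), with x = RPG^z.
    At 1 total RPG, x = 1, reproducing the known point W% = R/(R + RA).
    The exponent z = 0.29 is an assumption; common values run ~.27-.29."""
    x = (r_per_g + ra_per_g) ** z
    return r_per_g ** x / (r_per_g ** x + ra_per_g ** x)
```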

Finally, CTR based on Predicted W% (Pythagenpat based on runs created and allowed, actually Base Runs):

A few seasons ago I started including a CTR version based on actual wins and losses, but including the postseason. I am not crazy about this set of ratings, but I can’t quite articulate why.

On the one hand, adding in the playoffs is a no-brainer. The extra games are additional datapoints regarding team quality. If we have confidence in the rating system (and I won’t hold it against you if you don’t), then the unbalanced nature of the schedule for these additional games shouldn’t be too much of a concern. Yes, you’re playing stronger opponents, but the system understands that and will reward you (or at least not penalize you) for it.

On the other hand, there is a natural tendency among people who analyze baseball statistics to throw out the postseason, due to concerns about unequal opportunity (since most of the league doesn’t participate) and due to historical precedent. Unequal opportunity is a legitimate concern when evaluating individuals--particularly for counting or pseudo-counting metrics like those that use a replacement level baseline--but much less of a concern with teams. Even though the playoff participants may not be the ten most deserving teams by a strict, metric-based definition of “deserving”, there’s no question that teams are largely responsible for their own postseason fate to a much, much greater extent than any individual player is. And the argument from tradition is fine if the issue at hand is the record for team wins or individual home runs or the like, but not particularly applicable when we are simply using the games that have been played as datapoints by which to gauge team quality.

Additionally, the fact that playoff series are not played to their conclusion could be seen as introducing bias. If the Red Sox get swept by the Indians, they not only get three losses added to their ledger, they lose the opportunity to offset that damage. The number of games that are added to a team’s record, even within a playoff round, is directly related to their performance in the very small sample of games.

Suppose that after every month of the regular season, the bottom four teams in the league-wide standings were dropped from the schedule. So after April, the Twins’ 7-17 record is frozen in place. Do you think this would improve our estimates of team strength? And I don’t just mean because of the smaller sample; obviously their record as used in the ratings could be more heavily regressed than that of teams that played more games. But it would freeze our on-field observations of the Twins, and the overall effect would be to make the dropped teams look worse than their “true” strength.

I doubt that poorly reasoned argument swayed even one person, so the ratings including playoff performance are:

The teams sorted by difference between playoff CTR (pCTR) and regular season CTR (rsCTR):

It’s not uncommon for the pennant winners to be the big gainers, but the Cubs and Indians made a lot of hay this year, as the Cubs managed to pull every other team in the NL Central up one point in the ratings. The Rangers did the reverse with the AL West by getting swept out of the proceedings. They still had a better ranking than the team that knocked them out, as did Washington.

## Tuesday, January 10, 2017

### Hitting by Position, 2016

Of all the annual repeat posts I write, this is the one which most interests me--I have always been fascinated by patterns of offensive production by fielding position, particularly trends over baseball history and cases in which teams have unusual distributions of offense by position. I also contend that offensive positional adjustments, when carefully crafted and appropriately applied, remain a viable and somewhat more objective competitor to the defensive positional adjustments often in use, although this post does not really address those broad philosophical questions.

The first obvious thing to look at is the positional totals for 2016, with the data coming from Baseball-Reference.com. “MLB” is the overall total for MLB, which is not the same as the sum of all the positions here, as pinch-hitters and runners are not included in those. “POS” is the MLB totals minus the pitcher totals, yielding the composite performance by non-pitchers. “PADJ” is the position adjustment, which is the position RG divided by the overall major league average (this is a departure from past posts; I’ll discuss this a little at the end). “LPADJ” is the long-term positional adjustment that I use, based on 2002-2011 data. The rows “79” and “3D” are the combined corner outfield and 1B/DH totals, respectively:

Obviously when looking at a single season of data it’s imperative not to draw any sweeping conclusions. That doesn’t make it any less jarring to see that second basemen outhit every position save the corner infield spots, or that left fielders created runs at the league average rate. The utter collapse of corner outfield offense left them, even pooled, ahead only of catcher and shortstop. Pitchers also added another point of relative RG, marking two years in a row of improvement (such as it is) over their first negative run output in 2014.

It takes historical background to fully appreciate how the second base and corner outfield performances stack up. 109 for second base is the position’s best showing since 1924, when it was 110 thanks largely to Rogers Hornsby, Eddie Collins and Frankie Frisch. Second base had not hit for the league average since 1949. (I should note that the historical figures I’m citing are not directly comparable--they are based on each player’s primary position and include all of their PA, regardless of whether they were actually playing the position at the time or not, unlike the Baseball-Reference positional figures used for 2016). Corner outfield was even more extreme at 103, the nadir for the 116 seasons starting with 1901 (the previous low was 107 in 1992).

If the historical perspective is of interest, you may want to check out Corinne Landrey’s article in __The Hardball Times Baseball Annual__. She includes some charts showing OPS+ by position in the DH era and theorizes that an influx of star young players, still playing on the right side of the defensive spectrum, has led to the positional shakeup. While I cautioned above about over-generalizing from one year of data, it has been apparent over the last several years that the spread between positions has declined. Landrey’s explanation is as viable as any I’ve seen for these seasons’ results.

Moving on to looking at more granular levels of performance, I always start by looking at the NL pitching staffs and their RAA. I need to stress that the runs created method I’m using here does not take into account sacrifices, which usually is not a big deal but can be significant for pitchers. Note that all team figures from this point forward in the post are park-adjusted. The RAA figures for each position are baselined against the overall major league average RG for the position, except for left field and right field which are pooled.

This is the second consecutive year that the Giants led the league in RAA, and of course they employ the active pitcher most known for his batting. But as usual the spread from top to bottom is in the neighborhood of twenty runs.

I don’t run a full chart of the leading positions, since you will very easily be able to go down the list and identify the individual primarily responsible for the team’s performance, and you won’t be shocked by any of them. The teams with the highest RAA at each spot were:

C--WAS, 1B--CIN, 2B--WAS, 3B--TOR, SS--LA, LF--PIT, CF--LAA, RF--BOS, DH--BOS

More interesting are the worst performing positions; the player listed is the one who started the most games at that position for the team:

I have as little use for batting average as anyone, but I still find the Angels’ .209 left field average to be the single most entertaining number on that chart (remember, that’s park-adjusted; it was .204 raw). The least entertaining thing, for me at least, was the Indians’ production at catcher, which was tolerable when Roberto Perez was drawing walks but intolerable when Terry Francona was pinch-running for him in Game 7.

I like to attempt to measure each team’s offensive profile by position relative to a typical profile. I’ve found it frustrating as a fan when my team’s offensive production has come disproportionately from “defensive” positions rather than offensive positions (“Why can’t we just find a corner outfielder who can hit?”) The best way I’ve yet been able to come up with to measure this is to look at the correlation between RG at each position and the long-term positional adjustment. A positive correlation indicates a “traditional” distribution of offense by position--more production from the positions on the right side of the defensive spectrum. (To calculate this, I use the long-term positional adjustments that pool 1B/DH as well as LF/RF, and because of the DH I split it out by league):

As you can see, there are good offenses with high correlations, good offenses with low correlations, and every other combination. I have often used this space to bemoan the Indians’ continual struggle to get adequate production from first base, contributing to their usual finish in the bottom third or so of correlation. This year, they rank in the middle of the pack, and while it is likely a coincidence that this came in a good season, it’s worth noting that Mike Napoli was only average for a first baseman. Even that is much better than some of their previous showings.

Houston’s two best hitting positions (not relative to positional averages, but in terms of RG) were second base and shortstop. In fact, the Astros’ positions in descending order of RG were 4, 6, 9, 2, 5, 3, D, 7, 8. That’s how you get a fairly strong negative correlation between RG and PADJ.
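
The profile correlation described above can be sketched as follows. The LPADJ values and the team RG figures here are illustrative stand-ins, not the actual long-term adjustments or any real team's line.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Assumed long-term positional adjustments (1B/DH and LF/RF pooled) and a
# hypothetical team's RG by position--illustrative values, not real data.
lpadj   = {'C': 0.89, '1B/DH': 1.14, '2B': 0.93, '3B': 1.01,
           'SS': 0.86, 'LF/RF': 1.06, 'CF': 0.97}
team_rg = {'C': 3.9, '1B/DH': 5.4, '2B': 4.3, '3B': 4.8,
           'SS': 3.8, 'LF/RF': 5.0, 'CF': 4.4}

positions = sorted(lpadj)
profile_r = pearson([lpadj[p] for p in positions],
                    [team_rg[p] for p in positions])
# positive profile_r -> a "traditional" distribution of offense by position
```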

The following charts, broken out by division, display RAA for each position, with teams sorted by the sum of positional RAA. Positions with negative RAA are in red, and positions that are +/-20 RAA are bolded:

Boston had the AL’s most productive outfield, while Toronto was just an average offense after bashing their way to a league leading 118 total RAA in 2015. It remains jarring to see New York at the bottom of an offense list, even just for a division, and their corner infielders were the worst in the majors.

Other than catcher, Cleveland was solid everywhere, with no bold positions--and in this division, that’s enough to lead in RAA and power a cruise to the division title. Detroit had the AL’s top corner infield RAA (no thanks to third base). Kansas City, where to begin with the sweet, sweet schadenfreude? Esky Magic? No, already covered at length in the leadoff hitters post. Maybe the fact that they had the worst middle infield production in MLB? Or that the bros at the corners chipped in another -19 RAA to also give them the worst infield? The fact that they were dead last in the majors in total RAA? It’s just too much.

The pathetic production of the Los Angeles left fielders was discussed above. The Mike Trout-led center fielders were brilliant, the best single position in the majors. And so, even with a whopping -31 runs from left field, the Angels had the third-most productive outfield in MLB. Houston’s middle infielders, also mentioned above, were the best in the majors. Oakland’s outfield RAA was last in the AL.

Washington overcame the NL’s least productive corner infielders, largely because they had the NL’s most productive middle infielders. Miami had a similar but even more extreme juxtaposition, the NL’s worst infield and the majors’ best outfield, and that despite a subpar season from Giancarlo Stanton, as right field was the least productive of their three outfield spots. Atlanta had the NL’s worst-hitting middle infield, and Philadelphia the majors’ worst outfield despite Odubel Herrera making a fool of me.

Chicago was tops in the majors in corner infield RAA and total infield RAA. No other teams in this division achieved any superlatives but thanks to Joey Votto and a half-season of Jonathon Lucroy, every team was in the black for total RAA, even if we were to add in Cincinnati’s NL-trailing -9 RAA from pitchers.

No position grouping superlatives in this division, but it feels like more should be said about Corey Seager. It seems like a rookie shortstop hitting as he did, fielding adequately enough to be a serious MVP candidate for a playoff team in a huge market for one of the five or so most venerated franchises should have gotten a lot more attention than it did. Is it the notion that a move to third base is inevitable? Is he, like the superstar down the road, just considered too boring of a personality?

The full spreadsheet is available here.