Some time in early September, the media decided that Josh Donaldson was the AL MVP. I don't purposefully seek out media on the awards, but I've not heard any mainstream support for a non-Donaldson (read: Mike Trout) candidate since that point. Obviously Donaldson has the playoffs and the RBI, but for my money this is not a particularly close race.

Even if you take away park adjustments, which favor Trout to the tune of 7%, I estimate Trout created 124 runs and Donaldson 123. But Trout did that whilst making 26 fewer outs. Third base and center field are essentially a wash when it comes to position adjustments, and the most favorable comparison in the big three fielding metrics for Donaldson is his 11 DRS to Trout's 0 UZR. Bringing park factors back in, I have Trout with 79 RAR and Donaldson 64, leaving Trout ahead even with the most lopsided fielding comparison feasible.

The rest of my AL ballot is pretty straightforward based on the RAR list, with the exceptions of Manny Machado and Lorenzo Cain, who jump up a few spots on the basis of strong showings in fielding (Machado averaged +14 runs in the big three metrics, Cain +17) and baserunning (+3 and +4 respectively after removing steals, per __Baseball Prospectus__). I regress fielding just enough to let Nelson Cruz hang on to what started as a 15 run RAR lead over Machado, sprinkle in the top four pitchers, and wind up with this ballot:

1. CF Mike Trout, LAA

2. 3B Josh Donaldson, TOR

3. SP Dallas Keuchel, HOU

4. SP David Price, DET/TOR

5. RF Nelson Cruz, SEA

6. 3B Manny Machado, BAL

7. SP Sonny Gray, OAK

8. CF Lorenzo Cain, KC

9. SP Corey Kluber, CLE

10. RF Jose Bautista, TOR

In the National League, there's absolutely no question for me: Bryce Harper had an epic season with 96 RAR, and that's before adding his positive baserunning and fielding contributions. For the first time in his full-time career, Mike Trout would not be my choice for overall MLB MVP.

Behind him, five candidates have separation for the next five spots on the ballot--the top two first basemen Joey Votto and Paul Goldschmidt, and the top three starting pitchers (Jake Arrieta, Zack Greinke, and Clayton Kershaw). Looking solely at offense, Votto and Goldschmidt are basically even; while Votto's fielding is seen as above average, Goldschmidt is strong across the board (+13 FRAA, +5 UZR, and +18 DRS) and BP's baserunning metric has him as a positive (+2) while Votto is a big negative (-6).

Without Goldschmidt's strong ancillary contributions, I would drop him behind two or maybe even three of the pitchers, but I think he's got just enough value to stay ahead of them as is. (And yes, I did consider that both Greinke, with 5 runs created, and Arrieta, with 2, added value that wasn't considered in the Cy Young post. Greinke's offensive edge tempted me to flip him and Arrieta on the MVP ballot, but doing so would have been to generate a curiosity rather than borne of strong conviction.)

One note on the rest of the ballot: AJ Pollock would be here with 57 RAR regardless, but his defense and baserunning graded out well (-3 FRAA, +7 UZR, +14 DRS, +5 BP baserunning) while Andrew McCutchen's did not (-16, -5, -8, -2), enough to jump Pollock ahead of McCutchen, who led him 65 to 57 in RAR.

1. RF Bryce Harper, WAS

2. 1B Paul Goldschmidt, ARI

3. SP Jake Arrieta, CHN

4. SP Zack Greinke, LA

5. 1B Joey Votto, CIN

6. SP Clayton Kershaw, LA

7. C Buster Posey, SF

8. CF AJ Pollock, ARI

9. SP Max Scherzer, WAS

10. CF Andrew McCutchen, PIT

## Monday, November 16, 2015

### Hypothetical Ballot: MVP

## Thursday, November 12, 2015

### Hypothetical Ballot: Cy Young

I think that the Cy Young is the most interesting award to write about from a sabermetric perspective. The MVP debate can be fierce, but it often gets bogged down in semantic arguments about "what is value?" rather than substantive arguments about the candidates' resumes. It seems as if consensus about who is the "best player" is readily found in many years, and then people attempt to construct a narrative by which they can justify ignoring it.

On the other hand, the Cy Young debate is blissfully free from the semantic debate about what the award should represent, and instead discussion can be focused on how one determines the best pitcher. In the nascent days of sabermetrics, this could take the form of a classic ERA v. wins debate. Today, it often is sabermetricians and pseudo-sabermetricians duking it out over which type of performance metric should be used.

The NL race has that potential, while the AL race seems much more straightforward. Dallas Keuchel topped David Price by 12 RAR based on actual runs allowed adjusted for bullpen support. He topped Sonny Gray by 13 RAR and Price by 14 if you look at component statistics (including actual hits allowed). Using a DIPS-like approach, Keuchel was three RAR behind David Price and Corey Kluber. I give the most weight to the first, but unless you go full DIPS, Keuchel pretty clearly offers the best blend. Since Gray only had 35 RAR by DIPS, Price is a clear #2.

The last two spots on my ballot go to Kluber and Chris Archer, edging ahead of Jose Quintana and besting his teammate Chris Sale. Quintana had a slight edge in RAR over Kluber and Archer, but his 4.17 eRA was the worst of any contender and is enough for me to put Kluber and Archer, whose peripherals were stronger than their actual runs allowed, ahead. Sale led the league in dRA at 2.98 thanks to allowing a .331 average on balls in play (his teammate Quintana fared little better at .329), but Kluber and Archer's edge in the non-DIPS metrics is enough to get my vote:

1. Dallas Keuchel, HOU

2. David Price, DET/TOR

3. Sonny Gray, OAK

4. Corey Kluber, CLE

5. Chris Archer, TB

The NL race is a three-way battle between Zack Greinke, Clayton Kershaw, and Jake Arrieta. Greinke has a slight lead in RAR with 88 to Arrieta's 86 and Kershaw's 79. In RAR based on eRA, the two Dodgers are tied with 79 while Arrieta had 85. In dRA (DIPS)-based RAR, Kershaw leads with 72, while Arrieta had 65 and Greinke 48.

In comparing teammates, it becomes more difficult to accept the DIPS position at face value. They pitched in the same park, with the same teammates behind them. That in no way means that the defensive support they received had to have been of equal quality, or that Greinke couldn't have benefitted from random variation on balls in play (this formulation works better than Kershaw having been unlucky, given that Greinke's BABIP was .235 and Kershaw's .286). The gap in dRA is large, but not large enough for me to wipe out a nine run difference in RAR.

But while Greinke grades out as the Dodger Cy Young, I don't consider his two run lead in RAR over Arrieta significant enough given the latter's edge in the peripherals. While I think Kershaw is the best NL pitcher from a true talent perspective by a significant margin, I think Arrieta is most worthy of the Cy Young.

Max Scherzer is an easy choice for the #4 spot and would probably be in a virtual tie for second with his short-time teammate Price on my AL ballot. The last spot goes to Gerrit Cole over Jacob deGrom and John Lackey; the former was consistently valued by each of the three approaches (51 RAR based on actual runs allowed, 52 based on peripherals and DIPS):

1. Jake Arrieta, CHN

2. Zack Greinke, LA

3. Clayton Kershaw, LA

4. Max Scherzer, WAS

5. Gerrit Cole, PIT

## Monday, November 09, 2015

### Hypothetical Ballot: Rookie of the Year

In the AL, only one rookie reached 500 plate appearances (five did in the NL) and none reached 150 innings pitched (three in the NL), so there is a dearth of full season candidates for Rookie of the Year honors. The only full-time rookie was Billy Burns, and his 20 RAR was good for just fourth among AL rookie hitters. Still, two rookie shortstops managed to rise above the pack as the clear #1 and #2 choices for the award. Offensively, Carlos Correa and Francisco Lindor had nearly identical production; Lindor's OBA was eleven points higher, Correa's SLG was twenty points higher. In ten more PA, Correa created three more runs, so the two were nearly identical in RG and RAR. Correa's 33 to 31 RAR lead doesn't hold up, though, when fielding and baserunning are brought into the equation. While both were average baserunners according to __Baseball Prospectus__ (0 and -1 runs respectively), Lindor was +2 in FRAA, +11 in UZR, and +10 in DRS while Correa was -3, 0, and -6. That's convincing enough to place Lindor ahead on my ballot.

One thing to note is that I think Correa's performance was more impressive than Lindor's in terms of "prospect" status, but I don't think that's what the award is for. Correa is a year younger and his offensive performance was less dependent on a high batting average (Lindor hit .313 with a .249 SEC, Correa hit .279 with a .339 SEC) and Lindor's power output was higher than most expected. But while that matters going forward, I think Lindor was a more valuable player in 2015.

Lance McCullers, Nate Karns, Andrew Heaney, and Carlos Rodon were all candidates for ballot spots from the pitching side. I chose to value Karns' 147 innings over Heaney and Rodon's better peripherals. Miguel Sano was sixth in the AL in RG among players with more than 300 PA (basically equivalent to Edwin Encarnacion and Jose Bautista), but with just 333 PA and questionable value as a fielder or baserunner. So I have it:

1. SS Francisco Lindor, CLE

2. SS Carlos Correa, HOU

3. SP Lance McCullers, HOU

4. DH Miguel Sano, MIN

5. SP Nathan Karns, TB

The NL race is not close, as Kris Bryant put up a 50 RAR season and wasn't panned by the fielding metrics (-2 FRAA, +5 UZR, +3 DRS). Matt Duffy was thirteen runs behind offensively and was seen to be a good fielder, but even using the fielding metrics with no accounting for the additional uncertainty, Bryant would still be ahead. Joc Pederson and Jung Ho Kang are the other top position player candidates with 29 and 28 RAR, but FRAA hates Pederson (-19) while UZR and DRS just dislike him (-4 and -3 respectively). And yes I'm intentionally being silly by suggesting that the metrics like or dislike players. The consensus on Kang was slightly above average, which makes him the clear #3 hitter. Randal Grichuk is in the mix at 26 RAR, and one could certainly make a fielding case to put him ahead of Pederson.

Among pitchers, Noah Syndergaard's 29 RAR bests Anthony DeSclafani's 24, and Thor's peripherals are right in line with his RRA. So I see it as:

1. 3B Kris Bryant, CHN

2. 3B Matt Duffy, SF

3. SS Jung Ho Kang, PIT

4. SP Noah Syndergaard, NYN

5. CF Joc Pederson, LA

## Monday, November 02, 2015

### Royal Mythology

Rarely has the performance of a single team led to so many attempts to rationalize, explain, project virtue, and the like as the 2014-15 Royals. Focusing on the 2015 edition, here are just a handful of Royals myths that I have been particularly annoyed at hearing. The "analysis" that follows is not comprehensive nor is it intended to be. That's kind of the point. The level of extraordinary claims that have been made about the Royals should be apparent even with the crudest of inquiries into the objective record.

**Myth #1: Whatever the Heck Andy McCullough Tweeted**

"The entire point of the Royals is that baseball is a hard game and if you make your opponent do things, sometimes they will screw up"

The Kansas City Royals reached base on error 58 times in 2015. The AL average was 57. In 2014 they had 51 ROE versus a league average of 57.

**Myth #2: The Royals Don't Make Mistakes**

Errors leave a lot to be desired as a metric, but when traditional thinkers talk about making mistakes, errors are first and foremost on their mind. The 2015 Royals had a mFA of .973; the AL average was .971. The 2014 Royals had a mFA of .968; the AL average was .970.

**Myth #3: The Royals had a long World Series drought**

There are 30 MLB teams. It should be obvious, then, that 30 years is the expected time between world titles. Thus a streak of thirty years is not particularly long in theory. It's also not long in practice, as it was only the 12th longest drought (the Mets had the 13th longest drought). Last year en route to the pennant, two of the three teams Kansas City beat had (slightly) longer droughts and the other had a slightly shorter drought.

To find the Royals worthy of any particular sympathy, one must give extra credit for how poorly the franchise performed for much of that period. While this is unfortunate for the fans, it seems like such a group would be less traumatized by losing the World Series and more appreciative just to get there. Fan "suffering" is very low on my list of factors in deciding which teams to pull for in the playoffs, but to the extent I consider it, I tend to side with teams that have been good and just have not had the bounces go their way in October. Teams like the Marlins and the Royals who parlay their only two playoff teams in an extended period into pennants and world titles are quite galling to anyone who has rooted for a titleless yet competent franchise.

But more broadly, I think that the media and fans have yet to understand how championships will be distributed over the long haul in leagues that have doubled, or nearly doubled, in size from what they were for so many years. Lengthy droughts, the types that the Red Sox, Cubs, or to a lesser extent Indians and Giants have suffered, will be quite commonplace. Basic logic tells you that they have to be.

I did a "simulation" (which is a pretentious way of saying I used the RAND() function in Excel) to simulate 1,000 seasons of a thirty-team league in which each team had a 1/30 chance to win the World Series in any given year. Remember, this is the height of competitive balance. The probability of a championship could not be any more evenly distributed. There are no market disadvantages, no bad franchise stewardship, no billy goats. It is theoretically possible that the timing of championships could be more evenly distributed, but admittedly my imagination is insufficient to describe a specific scenario that would force a more even temporal distribution.

After 1,000 years, the average team should have had 33 1/3 titles. The most successful had 45; the two least successful each had 22 (as an aside, and granting that it was a sixteen team universe for an extended period, think about the Yankees' 27 in this context).

For years 501-1000, I calculated the average of the quartiles, as well as the percentage of active droughts as of a given year greater than 30 years. Since droughts for these 500 years are not independent of one another, be cautious with extrapolating those averages to anything else (for what it's worth, the medians are similar).

The average for these seasons was a first quartile drought of 8.4 years; a median drought of 20.2 years; a third quartile drought of 39.8 years, and a maximum drought of 115.0 years. In the average season, 34.4% of droughts exceeded 30 years (note that the current MLB figure is 12/26 = 46.2% of droughts exceeding 30 years, excluding the four subsequent expansion franchises, which suggests but in no way proves that, not surprisingly, the observed title distribution is not as egalitarian as the theoretical one used here).
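The Excel RAND() exercise is easy to replicate. Here's a rough Python equivalent of the setup described above; the seed is arbitrary, so expect figures in the same neighborhood as those quoted, not identical ones:

```python
import random
import statistics

random.seed(2015)  # arbitrary seed; any value gives the same general picture
TEAMS, YEARS = 30, 1000

last_title = [0] * TEAMS   # treat year 0 as every team's last title
titles = [0] * TEAMS
medians, long_shares = [], []

for year in range(1, YEARS + 1):
    champ = random.randrange(TEAMS)   # every team has a 1/30 chance each year
    titles[champ] += 1
    last_title[champ] = year
    if year > 500:                    # track drought stats for years 501-1000 only
        droughts = [year - lt for lt in last_title]
        medians.append(statistics.median(droughts))
        long_shares.append(sum(d > 30 for d in droughts) / TEAMS)

print(max(titles), min(titles))          # spread of title counts over the millennium
print(statistics.mean(medians))          # average median active drought
print(statistics.mean(long_shares))      # average share of droughts exceeding 30 years
```

Even in this maximally egalitarian setup, roughly a third of active droughts exceed 30 years in a typical season, which is the point of the exercise.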

Freezing it at year 1,000, this is what the drought picture looks like:

Even with new champions in 7 consecutive and 16 out of 20 seasons, a pretty typical 1/3 of droughts exceed 30 years, one team has exceeded the Cubs, and two more have exceeded the Indians.

The longest drought for any team during the millennium was 215 years. The poor fans of Team 6 celebrated a title in year 306, then went through many generations (or not, who knows, it's the future) before finally winning again in year 622. Then they waited another 120 years for good measure. Should baseball survive for 1,000 years with 30 or more teams, think about all of the narratives that the sportswriters of the future will get to craft.

**Myth #4: The Royals Need to Be Explained**

This is more of a meta-analytical comment than specific to the Royals, but there is an underlying notion, seen even on some sabermetrically-inclined outlets, that the Royals are an anomaly that demands our attention and an explanation. Please note that I am not criticizing the act of questioning one's premises, of attempting to update hypotheses as new data becomes available, of recognizing that we don't know everything about baseball, or anything of the sort. This is all laudable. But such inquiry must not be confused with an imperative to find fault in one's null hypotheses either.

But there all too often is a reflexive desire to be too conciliatory, too eager to throw out one's existing knowledge and toolkit in an attempt to explain something that may just be a fluke. Witness "The Year That Base Runs Failed" (an article that demands a thorough undressing that I just do not have the will to do justice to right now). Recently this has seemed to manifest itself more at outlets that rely on 1) boisterous, opinionated writers and 2) daily content production.

When you are boisterous and opinionated, you need your opinions to be right in order to maintain credibility. If you have to blame the tools (Base Runs, W% Estimators, the entirety of sabermetric theory) that you used to justify your initial opinion, that's fair game. On the other hand, my position on the Royals doesn't demand I apologize for it (maybe I should--as I acknowledged above, I could be wrong, and inquiry into why that might be the case is healthy). My position is simply that the Royals were a fairly average team as indicated by their component statistics, but that sometimes teams outplay their component statistics. The Royals made the playoffs and over two seasons went 22-9, but a .500 team would go 22-9 or better with 1.5% probability--it's not likely but it also must happen now and again. You can disagree, but it's inherently a passive argument.
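The 1.5% figure is a straightforward binomial calculation--the chance that a true .500 team wins at least 22 of 31 games:

```python
from math import comb

# P(a .500 team goes 22-9 or better in 31 games)
p = sum(comb(31, k) for k in range(22, 32)) / 2**31
print(round(p, 4))  # → 0.0147
```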

If you need to produce content daily, then you have to write about something, and writing "the sample size precludes us from drawing firm conclusions" over and over again doesn't drive readership. So there's a temptation to overfit your model, to declare that the secret sauce has been found, to cheat on the degree of certainty you require before you declare correlation to be causation, to investigate one positively correlated variable at the expense of other potential explanatory variables, to overreact to a year in which your metric's standard error is higher than it typically is.

Even great sabermetricians can get caught in this trap, and I have never been confused with a great sabermetrician but I have written things along these lines that I am not proud of as well. Bill James and Nate Silver have both, using different but understandable means when considered in the context of their work, failed pretty miserably at predicting playoff success based on historical data. The simple fact of the matter is that there were 32 playoff games (not counting the wildcard games) this season, which is fairly typical. At 30 games/season, you need five seasons to have a sample size the same as that of one major league team-season.

This is particularly problematic when so many of the attempts to explain playoff performance are based on theories about changes in the game. Contact superseding Moneyball, bullpen construction and usage patterns which have been in a constant state of change throughout baseball history...you could never have credible data without the conditions of the game shifting. This is not to say don't try to advance our understanding, it's to say be extremely cautious as you attempt to do so.

So what winds up happening is that a potential explanation ("Contact works, allow it" is a particularly poor paraphrase since it makes it sound like your pitchers should allow contact, but I saw that Colin Cowherd promo too many times not to use it) is honed in on, and maybe there's evidence of some effect, so other potential explanatory variables are ignored and the correlation is exaggerated and soon there's a truism that must be disproved rather than a hypothesis which must be proved.

There's a difference between saying "I don't know" and "No one will ever know". If it seems as if my school of thought arrives at the latter, that's a fair criticism. But I personally would rather be too certain about how much I can't know than to be too quick to think I've learned something new.

## Thursday, October 22, 2015

### End of Season Statistics, 2015

The spreadsheets are published as Google Spreadsheets, which you can download in Excel format by changing the extension in the address from "=html" to "=xls". That way you can download them and manipulate things however you see fit.

The data comes from a number of different sources. Most of the basic data comes from Doug's Stats, which is a very handy site, or Baseball-Reference. KJOK's park database provided some of the data used in the park factors, but for recent seasons park data comes from B-R. Data on inherited/bequeathed runners comes from Baseball Prospectus.

The basic philosophy behind these stats is to use the simplest methods that have acceptable accuracy. Of course, "acceptable" is in the eye of the beholder, namely me. I use Pythagenpat not because other run/win converters, like a constant RPW or a fixed exponent are not accurate enough for this purpose, but because it's mine and it would be kind of odd if I didn't use it.

If I seem to be a stickler for purity in my critiques of others' methods, I'd contend it is usually in a theoretical sense, not an input sense. So when I exclude hit batters, I'm not saying that hit batters are worthless or that they *should* be ignored; it's just easier not to mess with them and not that much less accurate.

I also don't really have a problem with people using sub-standard methods (say, Basic RC) as long as they acknowledge that they are sub-standard. If someone pretends that Basic RC doesn't undervalue walks or cause problems when applied to extreme individuals, I'll call them on it; if they explain its shortcomings but use it regardless, I accept that. Take these last three paragraphs as my acknowledgment that some of the statistics displayed here have shortcomings as well, and I've at least attempted to describe some of them in the discussion below.

The League spreadsheet is pretty straightforward--it includes league totals and averages for a number of categories, most or all of which are explained at appropriate junctures throughout this piece. The advent of interleague play has created two different sets of league totals--one for the offense of league teams and one for the defense of league teams. Before interleague play, these two were identical. I do not present both sets of totals (you can figure the defensive ones yourself from the team spreadsheet, if you desire), just those for the offenses. The exception is for the defense-specific statistics, like innings pitched and quality starts. The figures for those categories in the league report are for the defenses of the league's teams. However, I do include each league's breakdown of basic pitching stats between starters and relievers (denoted by "s" or "r" prefixes), and so summing those will yield the totals from the pitching side. The one abbreviation you might not recognize is "N"--this is the league average of runs/game for one team, and it will pop up again.

The Team spreadsheet focuses on overall team performance--wins, losses, runs scored, runs allowed. The columns included are: Park Factor (PF), Home Run Park Factor (PFhr), Winning Percentage (W%), Expected W% (EW%), Predicted W% (PW%), wins, losses, runs, runs allowed, Runs Created (RC), Runs Created Allowed (RCA), Home Winning Percentage (HW%), Road Winning Percentage (RW%) [exactly what they sound like--W% at home and on the road], Runs/Game (R/G), Runs Allowed/Game (RA/G), Runs Created/Game (RCG), Runs Created Allowed/Game (RCAG), and Runs Per Game (the average number of runs scored and allowed per game). Ideally, I would use outs as the denominator, but for teams, outs and games are so closely related that I don’t think it’s worth the extra effort.

The runs and Runs Created figures are unadjusted, but the per-game averages are park-adjusted, except for RPG which is also raw. Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. The formula is:

A = H + W - HR - CS

B = (2TB - H - 4HR + .05W + 1.5SB)*.76

C = AB - H

D = HR

Naturally, A*B/(B + C) + D.
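As a sketch, the simple Base Runs formula above translates directly to code (the stat line here is invented for illustration):

```python
def runs_created(ab, h, tb, hr, w, sb, cs):
    """Simple Base Runs, per the A/B/C/D components above."""
    a = h + w - hr - cs                                # baserunners
    b = (2 * tb - h - 4 * hr + .05 * w + 1.5 * sb) * .76  # advancement
    c = ab - h                                         # outs
    d = hr                                             # automatic runs
    return a * b / (b + c) + d

# a roughly team-shaped stat line (invented numbers)
print(round(runs_created(ab=5500, h=1400, tb=2200, hr=150, w=500, sb=100, cs=40), 1))  # → 702.5
```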

I have explained the methodology used to figure the PFs before, but the cliff’s notes version is that they are based on five years of data when applicable, include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of years of data used. There are factors for both runs and home runs. The initial PF (not shown) is:

iPF = (H*T/(R*(T - 1) + H) + 1)/2

where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for AL and 16 for NL). Then the iPF is converted to the PF by taking x*iPF + (1-x), where x = .6 if one year of data is used, .7 for 2, .8 for 3, and .9 for 4+.

It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list Fenway Park with a 1.02 PF, that means that it actually increases RPG by 4%.
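Put together, a sketch of the park factor calculation as defined above (the sample RPG figures are invented):

```python
def park_factor(home_rpg, road_rpg, teams, years):
    """iPF per the formula above, regressed toward 1 based on years of data."""
    ipf = (home_rpg * teams / (road_rpg * (teams - 1) + home_rpg) + 1) / 2
    x = {1: .6, 2: .7, 3: .8}.get(years, .9)   # .9 for 4+ years of data
    return x * ipf + (1 - x)

# e.g. 10 RPG at home, 9 on the road, 14-team league, 4 years of data
print(round(park_factor(10, 9, 14, 4), 3))  # → 1.046
```

Note how the `+ 1) / 2` step bakes in the 50/50 home/road split, so the output multiplies full-season RPG directly.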

In the calculation of the PFs, I did not get picky and take out “home” games that were actually at neutral sites.

There are also Team Offense and Defense spreadsheets. These include the following categories:

Team offense: Plate Appearances, Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Walks and Hit Batters per At Bat (WAB), Isolated Power (SLG - BA), R/G at home (hR/G), and R/G on the road (rR/G). BA, OBA, SLG, WAB, and ISO are park-adjusted by dividing by the square root of park factor (or the equivalent; WAB = (OBA - BA)/(1 - OBA), ISO = SLG - BA, and SEC = WAB + ISO).

Team defense: Innings Pitched, BA, OBA, SLG, Innings per Start (IP/S), Starter's eRA (seRA), Reliever's eRA (reRA), Quality Start Percentage (QS%), RA/G at home (hRA/G), RA/G on the road (rRA/G), Battery Mishap Rate (BMR), Modified Fielding Average (mFA), and Defensive Efficiency Record (DER). BA, OBA, and SLG are park-adjusted by dividing by the square root of PF; seRA and reRA are divided by PF.

The three fielding metrics I've included are limited to metrics that a) I can calculate myself and b) are based on the basic available data, not specialized PBP data. The three metrics are explained in this post, but here are quick descriptions of each:

1) BMR--wild pitches and passed balls per 100 baserunners = (WP + PB)/(H + W - HR)*100

2) mFA--fielding average removing strikeouts and assists = (PO - K)/(PO - K + E)

3) DER--the Bill James classic, using only the PA-based estimate of plays made. Based on a suggestion by Terpsfan101, I've tweaked the error coefficient. Plays Made = PA - K - H - W - HR - HB - .64E and DER = PM/(PM + H - HR + .64E)
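The three team fielding metrics, coded exactly as defined above (the inputs in the example are invented):

```python
def bmr(wp, pb, h, w, hr):
    """Battery Mishap Rate: wild pitches + passed balls per 100 baserunners."""
    return (wp + pb) / (h + w - hr) * 100

def mfa(po, k, e):
    """Modified Fielding Average: fielding average net of strikeouts and assists."""
    return (po - k) / (po - k + e)

def der(pa, k, h, w, hr, hb, e):
    """Defensive Efficiency Record with the .64 error coefficient."""
    pm = pa - k - h - w - hr - hb - .64 * e   # estimated plays made
    return pm / (pm + h - hr + .64 * e)

print(round(bmr(wp=50, pb=10, h=1400, w=500, hr=150), 2))                    # → 3.43
print(round(mfa(po=4350, k=1200, e=90), 3))                                  # → 0.972
print(round(der(pa=6200, k=1200, h=1400, w=500, hr=150, hb=50, e=90), 3))    # → 0.685
```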

Next are the individual player reports. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as a reliever. If a pitcher has 40 appearances, then they are included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included as a reliever (this allows some swingmen type pitchers who wouldn’t meet either the minimum start or appearance standards to get in).

For all of the player reports, ages are based on simply subtracting their year of birth from 2015. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a heckuva lot easier, and I am more interested in comparing the ages of the players to their contemporaries than fitting them into historical studies, and for the former application it makes very little difference. The "R" category records rookie status with a "R" for rookies and a blank for everyone else; I've trusted Baseball Prospectus on this. Also, all players are counted as being on the team with whom they played/pitched (IP or PA as appropriate) the most.

For relievers, the categories listed are: Games, Innings Pitched, estimated Plate Appearances (PA), Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS Run Average (dRA), Strikeouts per Game (KG), Walks per Game (WG), Guess-Future (G-F), Inherited Runners per Game (IR/G), Batting Average on Balls in Play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR).

IR/G is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men being used in low-leverage situations already out of hand.

For starting pitchers, the columns are: Wins, Losses, Innings Pitched, Estimated Plate Appearances (PA), RA, RRA, ERA, eRA, dRA, KG, WG, G-F, %H, Pitches/Start (P/S), Quality Start Percentage (QS%), RAA, and RAR. RA and ERA you know--R*9/IP or ER*9/IP, park-adjusted by dividing by PF. The formulas for eRA and dRA are based on the same Base Runs equation and they estimate RA, not ERA.

* eRA is based on the actual results allowed by the pitcher (hits, doubles, home runs, walks, strikeouts, etc.). It is park-adjusted by dividing by PF.

* dRA is the classic DIPS-style RA, assuming that the pitcher allows a league average %H, and that his hits in play have a league-average S/D/T split. It is park-adjusted by dividing by PF.

The formula for eRA is:

A = H + W - HR

B = (2*TB - H - 4*HR + .05*W)*.78

C = AB - H = K + (3*IP - K)*x (where x is figured as described below for PA estimation and is typically around .93) = PA (from below) - H - W

eRA = (A*B/(B + C) + HR)*9/IP
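Putting the eRA components together as a sketch (the sample line, and the .93 default for x, are invented for illustration):

```python
def era_est(h, tb, hr, w, k, ip, x=.93):
    """eRA: Base Runs applied to a pitcher's actual results allowed."""
    a = h + w - hr
    b = (2 * tb - h - 4 * hr + .05 * w) * .78
    c = k + (3 * ip - k) * x   # estimated AB - H
    return (a * b / (b + c) + hr) * 9 / ip

print(round(era_est(h=150, tb=240, hr=20, w=50, k=200, ip=200), 2))  # → 2.97
```

Park adjustment, per the text, is a final division by PF.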

To figure dRA, you first need the estimate of PA described below. Then you calculate W, K, and HR per PA (call these %W, %K, and %HR). Percentage of balls in play (BIP%) = 1 - %W - %K - %HR. This is used to calculate the DIPS-friendly estimate of %H (H per PA) as e%H = Lg%H*BIP%.

Now everything has a common denominator of PA, so we can plug into Base Runs:

A = e%H + %W

B = (2*(z*e%H + 4*%HR) - e%H - 5*%HR + .05*%W)*.78

C = 1 - e%H - %W - %HR

dRA = (A*B/(B + C) + %HR)/C*a

z is the league average of total bases per non-HR hit (TB - 4*HR)/(H - HR), and a is the league average of (AB - H) per game.
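And the dRA chain, with everything on a per-PA basis as above. The league constants in the example (Lg%H, z, and a) are invented placeholders, not actual league figures:

```python
def dra(pa, w, k, hr, lg_pct_h, z, a):
    """dRA: Base Runs with a league-average %H substituted for the pitcher's own."""
    pw, pk, phr = w / pa, k / pa, hr / pa
    bip = 1 - pw - pk - phr                  # balls in play per PA
    eh = lg_pct_h * bip                      # DIPS-friendly (H - HR) per PA
    A = eh + pw
    B = (2 * (z * eh + 4 * phr) - eh - 5 * phr + .05 * pw) * .78
    C = 1 - eh - pw - phr
    return (A * B / (B + C) + phr) / C * a

print(round(dra(pa=800, w=50, k=200, hr=20, lg_pct_h=.29, z=1.55, a=25.3), 2))  # → 3.94
```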

In the past I presented a couple of batted ball RA estimates. I’ve removed these, not just because batted ball data exhibits questionable reliability but because these metrics were complicated to figure, required me to collate the batted ball data, and were not personally useful to me. I figure these stats for my own enjoyment and have done so in some form or another going back to 1997. I share them here only because I would do it anyway, so if I’m not interested in certain categories, there’s no reason to keep presenting them.

Instead, I’m showing strikeout and walk rate, both expressed as per game. By game I mean not nine innings but rather the league average of PA/G. I have always been a proponent of using PA and not IP as the denominator for non-run pitching rates, and now the use of per PA rates is widespread. Usually these are expressed as K/PA and W/PA, or equivalently, percentage of PA with a strikeout or walk. I don’t believe that any site publishes these as K and W per equivalent game as I am here. This is not better than K%--it’s simply applying a scalar multiplier. I like it because it generally follows the same scale as the familiar K/9.

To facilitate this, I’ve finally corrected a flaw in the formula I use to estimate plate appearances for pitchers. Previously, I’ve done it the lazy way by not splitting strikeouts out from other outs. I am now using this formula to estimate PA (where PA = AB + W):

PA = K + (3*IP - K)*x + H + W

Where x = league average of (AB - H - K)/(3*IP - K)

Then KG = K/PA*Lg(PA/G) and WG = W/PA*Lg(PA/G).
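In code, the PA estimate and the per-PA rates scaled to the league PA/G look like this; the values of x and league PA/G here are illustrative, not actual league figures:

```python
def estimate_pa(K, IP, H, W, x=0.93):
    """Estimated plate appearances (AB + W) against a pitcher."""
    return K + (3 * IP - K) * x + H + W

def kg_wg(K, W, IP, H, x=0.93, lg_pa_per_g=37.9):
    """Strikeouts and walks per equivalent game (league PA/G scale)."""
    pa = estimate_pa(K, IP, H, W, x)
    return K / pa * lg_pa_per_g, W / pa * lg_pa_per_g

kg, wg = kg_wg(K=200, W=50, IP=200, H=170)
print(round(kg, 2), round(wg, 2))  # about 9.57 and 2.39
```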

G-F is a junk stat, included here out of habit because I've been including it for years. It was intended to give a quick read of a pitcher's expected performance in the next season, based on eRA and strikeout rate. Although the numbers vaguely resemble RAs, it's actually unitless. As a rule of thumb, anything under four is pretty good for a starter. G-F = 4.46 + .095(eRA) - .113(K*9/IP). It is a junk stat. JUNK STAT JUNK STAT JUNK STAT. Got it?

%H is BABIP, more or less--%H = (H - HR)/(PA - HR - K - W), where PA was estimated above. Pitches/Start includes all appearances, so I've counted relief appearances as one-half of a start (P/S = Pitches/(.5*G + .5*GS)). QS% is just QS/GS; I don't think it's particularly useful, but Doug's Stats include QS so I include it.

I've used a stat called Relief Run Average (RRA) in the past, based on Sky Andrecheck's article in the August 1999 By the Numbers; that one only used inherited runners, but I've revised it to include bequeathed runners as well, making it equally applicable to starters and relievers. I use RRA as the building block for baselined value estimates for all pitchers. I explained RRA in this article, but the bottom line formulas are:

BRSV = BRS - BR*i*sqrt(PF)

IRSV = IR*i*sqrt(PF) - IRS

RRA = ((R - (BRSV + IRSV))*9/IP)/PF
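As a sketch in code (the inputs are invented, and i, the league rate at which inherited/bequeathed runners are expected to score, is set to an illustrative 0.3):

```python
import math

def rra(R, IP, BR, BRS, IR, IRS, i, PF):
    """Relief Run Average: RA after settling runner inheritance."""
    # bequeathed runs scored above expectation (removed from starter's R)
    brsv = BRS - BR * i * math.sqrt(PF)
    # runs saved by stranding inherited runners relative to expectation
    irsv = IR * i * math.sqrt(PF) - IRS
    return ((R - (brsv + irsv)) * 9 / IP) / PF

# A starter who bequeathed 15 runners, 6 of whom scored:
print(rra(R=70, IP=180, BR=15, BRS=6, IR=0, IRS=0, i=0.3, PF=1.0))  # 3.425
```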

The two baselined stats are Runs Above Average (RAA) and Runs Above Replacement (RAR). This year I've revised RAA to use a slightly different baseline for starters and relievers as described here. The adjustment is based on patterns from the last several seasons of league average starter and reliever eRA. Thus it does not adjust for any advantages relief pitchers enjoy that are not reflected in their component statistics. This could include runs allowed scoring rules that benefit relievers (although the use of RRA should help even the scales in this regard, at least compared to raw RA) and the talent advantage of starting pitchers. The RAR baselines do attempt to take the latter into account, and so the difference in starter and reliever RAR will be more stark than the difference in RAA.

RAA (relievers) = (.951*LgRA - RRA)*IP/9

RAA (starters) = (1.025*LgRA - RRA)*IP/9

RAR (relievers) = (1.11*LgRA - RRA)*IP/9

RAR (starters) = (1.28*LgRA - RRA)*IP/9
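These baselines translate to code directly; the league RA and the sample line are invented:

```python
def pitcher_raa_rar(rra, ip, lg_ra, starter=True):
    """RAA and RAR from RRA, with separate starter/reliever baselines."""
    raa_base = 1.025 if starter else 0.951
    rar_base = 1.28 if starter else 1.11
    raa = (raa_base * lg_ra - rra) * ip / 9
    rar = (rar_base * lg_ra - rra) * ip / 9
    return raa, rar

raa, rar = pitcher_raa_rar(rra=3.5, ip=200, lg_ra=4.5, starter=True)
print(round(raa, 1), round(rar, 1))  # 24.7 50.2
```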

All players with 250 or more plate appearances (official, total plate appearances) are included in the Hitters spreadsheets (along with some players close to the cutoff point who I was interested in). Each is assigned one position, the one at which they appeared in the most games. The statistics presented are: Games played (G), Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Runs Created (RC), Runs Created per Game (RG), Speed Score (SS), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), Hitting Runs Above Replacement (HRAR), and Runs Above Replacement (RAR).

Starting in 2015, I'm including hit batters in all related categories for hitters, so PA is now equal to AB + W + HB. Outs are AB - H + CS. BA and SLG you know, but remember that without SF, OBA is just (H + W + HB)/(AB + W + HB). Secondary Average = (TB - H + W + HB)/AB = SLG - BA + (OBA - BA)/(1 - OBA). I have not included net steals as many people (and Bill James himself) do, but I have included HB, which some do not.

BA, OBA, and SLG are park-adjusted by dividing by the square root of PF. This is an approximation, of course, but I'm satisfied that it works well (I plan to post a couple articles on this some time during the offseason). The goal here is to adjust for the win value of offensive events, not to quantify the exact park effect on the given rate. I use the BA/OBA/SLG-based formula to figure SEC, so it is park-adjusted as well.

Runs Created is actually Paul Johnson's ERP, more or less. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available.

For 2015, I've refined the formula a little bit (see this post for more) to:

1. include hit batters at a value equal to that of a walk

2. value intentional walks at just half the value of a regular walk

3. recalibrate the multiplier based on the last ten major league seasons (2005-2014)

This revised RC = (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB)*.310

RC is park adjusted by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the Runs Created per Game rate, is RC/O*25.5. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad. (I still intend to finish my rate stat series and discuss all of the options in excruciating detail, but alas you’ll have to take my word for it now).
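Here is the revised RC and the park-adjusted RG in code; the batting line is invented:

```python
def runs_created(TB, H, W, HB, IW, SB, CS, AB):
    """Revised ERP-style Runs Created."""
    return (TB + 0.8 * H + W + HB - 0.5 * IW + 0.7 * SB - CS - 0.3 * AB) * 0.310

def runs_per_game(rc, outs, pf=1.0):
    """Park-adjusted Runs Created per game (RG)."""
    return rc / pf / outs * 25.5

rc = runs_created(TB=300, H=160, W=70, HB=5, IW=5, SB=10, CS=5, AB=550)
outs = 550 - 160 + 5  # AB - H + CS
print(round(rc, 1), round(runs_per_game(rc, outs), 2))  # about 104.6 and 6.75
```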

Several years ago I switched from using my own "Speed Unit" to a version of Bill James' Speed Score; of course, Speed Unit was inspired by Speed Score. I only use four of James' categories in figuring Speed Score. I actually like the construct of Speed Unit better as it was based on z-scores in the various categories (and amazingly a couple other sabermetricians did as well), but trying to keep the estimates of standard deviation for each of the categories appropriate was more trouble than it was worth.

Speed Score is the average of four components, which I'll call a, b, c, and d:

a = ((SB + 3)/(SB + CS + 7) - .4)*20

b = sqrt((SB + CS)/(S + W))*14.3

c = ((R - HR)/(H + W - HR) - .1)*25

d = T/(AB - HR - K)*450

James actually uses a sliding scale for the triples component, but it strikes me as needlessly complex and so I've streamlined it. He looks at two years of data, which makes sense for a gauge that is attempting to capture talent and not performance, but using multiple years of data would be contradictory to the guiding principles behind this set of reports (namely, simplicity. Or laziness. Your pick.) I also changed some of his division to mathematically equivalent multiplications.
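The streamlined Speed Score, in code (S is singles, T is triples; the sample line is invented):

```python
def speed_score(SB, CS, S, W, R, HR, H, T, AB, K):
    """Average of the four components a, b, c, d described above."""
    a = ((SB + 3) / (SB + CS + 7) - 0.4) * 20       # stolen base percentage
    b = ((SB + CS) / (S + W)) ** 0.5 * 14.3         # attempt frequency
    c = ((R - HR) / (H + W - HR) - 0.1) * 25        # runs scored rate
    d = T / (AB - HR - K) * 450                     # triples rate
    return (a + b + c + d) / 4

print(round(speed_score(SB=30, CS=10, S=100, W=50, R=90, HR=15,
                        H=150, T=5, AB=550, K=100), 1))  # about 6.6
```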

There are a whopping four categories that compare to a baseline; two for average, two for replacement. Hitting RAA compares to a league average hitter; it is in the vein of Pete Palmer’s Batting Runs. RAA compares to an average hitter at the player’s primary position. Hitting RAR compares to a “replacement level” hitter; RAR compares to a replacement level hitter at the player’s primary position. The formulas are:

HRAA = (RG - N)*O/25.5

RAA = (RG - N*PADJ)*O/25.5

HRAR = (RG - .73*N)*O/25.5

RAR = (RG - .73*N*PADJ)*O/25.5

PADJ is the position adjustment, and it is based on 2002-2011 offensive data. For catchers it is .89; for 1B/DH, 1.17; for 2B, .97; for 3B, 1.03; for SS, .93; for LF/RF, 1.13; and for CF, 1.02. I had been using the 1992-2001 data as a basis for some time, but finally updated for 2012. I’m a little hesitant about this update, as the middle infield positions are the biggest movers (higher positional adjustments, meaning less positional credit). I have no qualms for second base, but the shortstop PADJ is out of line with the other position adjustments widely in use and feels a bit high to me. But there are some decent points to be made in favor of offensive adjustments, and I’ll have a bit more on this topic in general below.
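Putting the four baselines and the position adjustments together in code; N (the league RG) and the player line are invented:

```python
# 2002-2011 offensive position adjustments from the text
PADJ = {"C": 0.89, "1B": 1.17, "DH": 1.17, "2B": 0.97, "3B": 1.03,
        "SS": 0.93, "LF": 1.13, "RF": 1.13, "CF": 1.02}

def baselined_runs(rg, outs, pos, N):
    """HRAA, RAA, HRAR, RAR for a hitter."""
    padj = PADJ[pos]
    hraa = (rg - N) * outs / 25.5
    raa = (rg - N * padj) * outs / 25.5
    hrar = (rg - 0.73 * N) * outs / 25.5
    rar = (rg - 0.73 * N * padj) * outs / 25.5
    return hraa, raa, hrar, rar

# A 6 RG shortstop making 400 outs in a 4.5 N league:
hraa, raa, hrar, rar = baselined_runs(6.0, 400, "SS", 4.5)
print(round(raa, 1), round(rar, 1))  # about 28.5 and 46.2
```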

That was the mechanics of the calculations; now I'll twist myself into knots trying to justify them. If you only care about the how and not the why, stop reading now.

The first thing that should be covered is the philosophical position behind the statistics posted here. They fall on the continuum of ability and value in what I have called "performance". Performance is a technical-sounding way of saying "Whatever arbitrary combination of ability and value I prefer".

With respect to park adjustments, I am not interested in how any particular player is affected, so there is no separate adjustment for lefties and righties for instance. The park factor is an attempt to determine how the park affects run scoring rates, and thus the win value of runs.

I apply the park factor directly to the player's statistics, but it could also be applied to the league context. The advantage to doing it my way is that it allows you to compare the component statistics (like Runs Created or OBA) on a park-adjusted basis. The drawback is that it creates a new theoretical universe, one in which all parks are equal, rather than leaving the player grounded in the actual context in which he played and evaluating how that context (and not the player's statistics) was altered by the park.

The good news is that the two approaches are essentially equivalent; in fact, they are precisely equivalent if you assume that the Runs Per Win factor is equal to the RPG. Suppose that we have a player in an extreme park (PF = 1.15, approximately like Coors Field pre-humidor) who has an 8 RG before adjusting for park, while making 350 outs in a 4.5 N league. The first method of park adjustment, the one I use, converts his value into a neutral park, so his RG is now 8/1.15 = 6.957. We can now compare him directly to the league average:

RAA = (6.957 - 4.5)*350/25.5 = +33.72

The second method would be to adjust the league context. If N = 4.5, then the average player in this park will create 4.5*1.15 = 5.175 runs. Now, to figure RAA, we can use the unadjusted RG of 8:

RAA = (8 - 5.175)*350/25.5 = +38.77

These are not the same, as you can obviously see. The reason for this is that they take place in two different contexts. The first figure is in a 9 RPG (2*4.5) context; the second figure is in a 10.35 RPG (2*4.5*1.15) context. Runs have different values in different contexts; that is why we have RPW converters in the first place. If we convert to WAA (using RPW = RPG, which is only an approximation, so it's usually not as tidy as it appears below), then we have:

WAA = 33.72/9 = +3.75

WAA = 38.77/10.35 = +3.75

Once you convert to wins, the two approaches are equivalent. The other nice thing about the first approach is that once you park-adjust, everyone in the league is in the same context, and you can dispense with the need for converting to wins at all. You still might want to convert to wins, and you'll need to do so if you are comparing the 2015 players to players from other league-seasons (including between the AL and NL in the same year), but if you are only looking to compare Jose Bautista to Miguel Cabrera, it's not necessary. WAR is somewhat ubiquitous now, but personally I prefer runs when possible--why mess with decimal points if you don't have to?
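The example above is easy to verify in code: under the RPW = RPG assumption, the two park-adjustment approaches agree exactly once converted to wins.

```python
PF, N, outs, rg = 1.15, 4.5, 350, 8.0

raa_player_adj = (rg / PF - N) * outs / 25.5   # adjust the player's stats
raa_league_adj = (rg - N * PF) * outs / 25.5   # adjust the league context

waa1 = raa_player_adj / (2 * N)        # 9 RPG context
waa2 = raa_league_adj / (2 * N * PF)   # 10.35 RPG context
print(round(raa_player_adj, 2), round(raa_league_adj, 2))  # 33.72 38.77
print(round(waa1, 2), round(waa2, 2))  # both about 3.75
```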

The park factors used to adjust player stats here are run-based. Thus, they make no effort to project what a player "would have done" in a neutral park, or account for the difference effects parks have on specific events (walks, home runs, BA) or types of players. They simply account for the difference in run environment that is caused by the park (as best I can measure it). As such, they don't evaluate a player within the actual run context of his team's games; they attempt to restate the player's performance as an equivalent performance in a neutral park.

I suppose I should also justify the use of sqrt(PF) for adjusting component statistics. The classic defense given for this approach relies on basic Runs Created--runs are proportional to OBA*SLG, and OBA*SLG/PF = OBA/sqrt(PF)*SLG/sqrt(PF). While RC may be an antiquated tool, you will find that the square root adjustment is fairly compatible with linear weights or Base Runs as well. I am not going to take the space to demonstrate this claim here, but I will some time in the future.

Many value figures published around the sabersphere adjust for the difference in quality level between the AL and NL. I don't, but this is a thorny area where there is no right or wrong answer as far as I'm concerned. I also do not make an adjustment in the league averages for the fact that the overall NL averages include pitcher batting and the AL does not (not quite true in the era of interleague play, but you get my drift).

The difference between the leagues may not be precisely calculable, and it certainly is not constant, but it is real. If the average player in the AL is better than the average player in the NL, it is perfectly reasonable to expect the average AL player to have more RAR than the average NL player, and that will not happen without some type of adjustment. On the other hand, if you are only interested in evaluating a player relative to his own league, such an adjustment is not necessarily welcome.

The league argument only applies cleanly to metrics baselined to average. Since replacement level compares the given player to a theoretical player that can be acquired on the cheap, the same pool of potential replacement players should by definition be available to the teams of each league. One could argue that if the two leagues don't have equal talent at the major league level, they might not have equal access to replacement level talent--except such an argument is at odds with the notion that replacement level represents talent that is truly "freely available".

So it's hard to justify the approach I take, which is to set replacement level relative to the average runs scored in each league, with no adjustment for the difference in the leagues. The best justification is that it's simple and it treats each league as its own universe, even if in reality they are connected.

The replacement levels I have used here are very much in line with the values used by other sabermetricians. This is based on my own "research", my interpretation of other people's research, and a desire not to stray from consensus and make the values unhelpful to the majority of people who may encounter them.

Replacement level is certainly not settled science. There is always going to be room to disagree on what the baseline should be. Even if you agree it should be "replacement level", any estimate of where it should be set is just that--an estimate. Average is clean and fairly straightforward, even if its utility is questionable; replacement level is inherently messy. So I offer the average baseline as well.

For position players, replacement level is set at 73% of the positional average RG (since there's a history of discussing replacement level in terms of winning percentages, this is roughly equivalent to .350). For starting pitchers, it is set at 128% of the league average RA (.380), and for relievers it is set at 111% (.450).

I am still using an analytical structure that makes the comparison to replacement level for a position player by applying it to his hitting statistics. This is the approach taken by Keith Woolner in VORP (and some other earlier replacement level implementations), but the newer metrics (among them Rally and Fangraphs' WAR) handle replacement level by subtracting a set number of runs from the player's total runs above average in a number of different areas (batting, fielding, baserunning, positional value, etc.), which for lack of a better term I will call the subtraction approach.

The offensive positional adjustment makes the inherent assumption that the average player at each position is equally valuable. I think that this is close to being true, but it is not quite true. The ideal approach would be to use a defensive positional adjustment, since the real difference between a first baseman and a shortstop is their defensive value. When you bat, all runs count the same, whether you create them as a first baseman or as a shortstop.

That being said, using "replacement hitter at position" does not cause too many distortions. It is not theoretically correct, but it is practically powerful. For one thing, most players, even those at key defensive positions, are chosen first and foremost for their offense. Empirical research by Keith Woolner has shown that the replacement level hitting performance is about the same for every position, relative to the positional average.

Figuring what the defensive positional adjustment should be, though, is easier said than done. Therefore, I use the offensive positional adjustment. So if you want to criticize that choice, or criticize the numbers that result, be my guest. But do not claim that I am holding this up as the correct analytical structure. I am holding it up as the most simple and straightforward structure that conforms to reality reasonably well, and because while the numbers may be flawed, they are at least based on an objective formula that I can figure myself. If you feel comfortable with some other assumptions, please feel free to ignore mine.

That still does not justify the use of HRAR--hitting runs above replacement--which compares each hitter, regardless of position, to 73% of the league average. Basically, this is just a way to give an overall measure of offensive production without regard for position with a low baseline. It doesn't have any real baseball meaning.

A player who creates runs at 90% of the league average could be above-average (if he's a shortstop or catcher, or a great fielder at a less important fielding position), or sub-replacement level (DHs that create 3.5 runs per game are not valuable properties). Every player is chosen because his total value, both hitting and fielding, is sufficient to justify his inclusion on the team. HRAR fails even if you try to justify it with a thought experiment about a world in which defense doesn't matter, because in that case the absolute replacement level (in terms of RG, without accounting for the league average) would be much higher than it is currently.

The specific positional adjustments I use are based on 2002-2011 data. I stick with them because I have not seen compelling evidence of a change in the degree of difficulty or scarcity between the positions between now and then, and because I think they are fairly reasonable. The positions for which they diverge the most from the defensive position adjustments in common use are 2B, 3B, and CF. Second base is considered a premium position by the offensive PADJ (.97), while third base and center field have similar adjustments in the opposite direction (1.03 and 1.02).

Another flaw is that the PADJ is applied to the overall league average RG, which is artificially low for the NL because of pitchers batting. When using the actual league average runs/game, it's tough to just remove pitchers--any adjustment would be an estimate. If you use the league total of runs created instead, it is a much easier fix.

One other note on this topic is that since the offensive PADJ is a stand-in for average defensive value by position, ideally it would be applied by tying it to defensive playing time. I have done it by outs, though.

The reason I have taken this flawed path is because 1) it ties the position adjustment directly into the RAR formula rather than leaving it as something to subtract on the outside and more importantly 2) there’s no straightforward way to do it. The best would be to use defensive innings--set the full-time player to X defensive innings, figure how Derek Jeter’s innings compared to X, and adjust his PADJ accordingly. Games in the field or games played are dicey because they can cause distortion for defensive replacements. Plate Appearances avoid the problem that outs have of being highly related to player quality, but they still carry the illogic of basing it on offensive playing time. And of course the differences here are going to be fairly small (a few runs). That is not to say that this way is preferable, but it’s not horrible either, at least as far as I can tell.

To compare this approach to the subtraction approach, start by assuming that a replacement level shortstop would create .86*.73*4.5 = 2.825 RG (or would perform at an overall level of equivalent value to being an average fielder at shortstop while creating 2.825 runs per game). Suppose that we are comparing two shortstops, each of whom compiled 600 PA and played an equal number of defensive games and innings (and thus would have the same positional adjustment using the subtraction approach). Alpha made 380 outs and Bravo made 410 outs, and each ranked as dead-on average in the field.

The difference in overall RAR between the two using the subtraction approach would be equal to the difference between their offensive RAA compared to the league average. Assuming the league average is 4.5 runs, and that both Alpha and Bravo created 75 runs, their offensive RAAs are:

Alpha = (75*25.5/380 - 4.5)*380/25.5 = +7.94

Similarly, Bravo is at +2.65, and so the difference between them will be 5.29 RAR.

Using the flawed approach, Alpha's RAR will be:

(75*25.5/380 - 4.5*.73*.86)*380/25.5 = +32.90

Bravo's RAR will be +29.58, a difference of 3.32 RAR, which is two runs off of the difference using the subtraction approach.
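The Alpha/Bravo arithmetic can be reproduced directly, using the example's 4.5 run league and its .86*.73 shortstop baseline:

```python
N = 4.5
rep = 4.5 * 0.73 * 0.86   # replacement SS baseline from the example (2.825 RG)

def off_raa(rc, outs):
    """Offensive runs above the league average (subtraction approach)."""
    return (rc * 25.5 / outs - N) * outs / 25.5

def pos_rar(rc, outs):
    """Runs above the positional replacement baseline (this report's approach)."""
    return (rc * 25.5 / outs - rep) * outs / 25.5

sub_diff = off_raa(75, 380) - off_raa(75, 410)  # subtraction approach
rar_diff = pos_rar(75, 380) - pos_rar(75, 410)  # this report's approach
print(round(sub_diff, 2), round(rar_diff, 2))   # 5.29 3.32
```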

The downside to using PA is that you really need to consider park effects if you do, whereas outs allow you to sidestep park effects. Outs are constant; plate appearances are linked to OBA. Thus, they not only depend on the offensive context (including park factor), but also on the quality of one's team. Of course, attempting to adjust for team PA differences opens a huge can of worms which is not really relevant; for now, the point is that using outs for individual players causes distortions, sometimes trivial and sometimes bothersome, but almost always makes one's life easier.

I do not include fielding (or baserunning outside of steals, although that is a trivial consideration in comparison) in the RAR figures--they cover offense and positional value only. This in no way means that I do not believe that fielding is an important consideration in player evaluation. However, two of the key principles of these stat reports are 1) not incorporating any data that is not readily available and 2) not simply including other people's results (of course I borrow heavily from other people's methods, but only adapting methodology that I can apply myself).

Any fielding metric worth its salt will fail to meet either criterion--they use zone data or play-by-play data which I do not have easy access to. I do not have a fielding metric that I have stapled together myself, and so I would have to simply lift other analysts' figures.

Setting the practical reason for not including fielding aside, I do have some reservations about lumping fielding and hitting value together in one number because of the obvious differences in reliability between offensive and fielding metrics. In theory, they absolutely should be put together. But in practice, I believe it would be better to regress the fielding metric to a point at which it would be roughly equivalent in reliability to the offensive metric.

Offensive metrics have error bars associated with them, too, of course, and in evaluating a single season's value, I don't care about the vagaries that we often lump together as "luck". Still, there are errors in our assessment of linear weight values and players that collect an unusual proportion of infield hits or hits to the left side, errors in estimation of park factor, and any number of other factors that make their events more or less valuable than an average event of that type.

Fielding metrics offer up all of that and more, as we cannot be nearly as certain of true successes and failures as we are when analyzing offense. Recent investigations, particularly by Colin Wyers, have raised even more questions about the level of uncertainty. So, even if I was including a fielding value, my approach would be to assume that the offensive value was 100% reliable (which it isn't), and regress the fielding metric relative to that (so if the offensive metric was actually 70% reliable, and the fielding metric 40% reliable, I'd treat the fielding metric as .4/.7 = 57% reliable when tacking it on, to illustrate with a simplified and completely made up example presuming that one could have a precise estimate of nebulous "reliability").

Given the inherent assumption of the offensive PADJ that all positions are equally valuable, once RAR has been figured for a player, fielding value can be accounted for by adding on his runs above average relative to a player at his own position. If there is a shortstop that is -2 runs defensively versus an average shortstop, he is without a doubt a plus defensive player, and a more valuable defensive player than a first baseman who was +1 run better than an average first baseman. Regardless, since it was implicitly assumed that they are both average defensively for their position when RAR was calculated, the shortstop will see his value docked two runs. This DOES NOT MEAN that the shortstop has been penalized for his defense. The whole process of accounting for positional differences, going from hitting RAR to positional RAR, has benefited him.

I've found that there is often confusion about the treatment of first basemen and designated hitters in my PADJ methodology, since I consider DHs to be in the same pool as first basemen. The fact of the matter is that first basemen outhit DHs. There are any number of potential explanations for this: DHs are often old or injured, players hit worse when DHing than they do when playing the field, etc. This actually helps first basemen, since the DHs drag the average production of the pool down, resulting in a lower replacement level than I would get if I considered first basemen alone.

However, this method does assume that a 1B and a DH have equal defensive value. Obviously, a DH has no defensive value. What I advocate to correct this is to treat a DH as a bad defensive first baseman, and thus knock another five or so runs off of his RAR for a full-time player. I do not incorporate this into the published numbers, but you should keep it in mind. However, there is no need to adjust the figures for first basemen upwards--the only necessary adjustment is to take the DHs down a notch.

Finally, I consider each player at his primary defensive position (defined as where he appears in the most games), and do not weight the PADJ by playing time. This does shortchange a player like Ben Zobrist (who saw significant time at a tougher position than his primary position), and unduly boost a player like Buster Posey (who logged a lot of games at a much easier position than his primary position). For most players, though, it doesn't matter much. I find it preferable to make manual adjustments for the unusual cases rather than add another layer of complexity to the whole endeavor.

2015 League

2015 Park Factors

2015 Teams

2015 Team Offense

2015 Team Defense

2015 AL Relievers

2015 NL Relievers

2015 AL Starters

2015 NL Starters

2015 AL Hitters

2015 NL Hitters

## Monday, October 05, 2015

### Crude Playoff Odds--2015

These are very simple playoff odds, based on my crude rating system for teams using an equal mix of W%, EW% (based on R/RA), PW% (based on RC/RCA), and 69 games of .500. They account for home field advantage by assuming a .500 team wins 54.5% of home games. They assume that a team's inherent strength is constant from game-to-game. They do not account for any number of factors that you would actually want to account for if you were serious about this, including but not limited to injuries, the current construction of the team rather than the aggregate seasonal performance, pitching rotations, estimated true talent of the players, and whatever Ned Yost sold to the devil.

These are the ratings that fuel the odds (CTR(H) is the team's rating with home field advantage included):

You'll note that the ratings love Houston much more than anyone else, thanks to their excellent EW% and PW%. The league disparity remains strong, with the average AL team (not playoff team) having a 107 rating with 100 being average. To the extent the ratings look odd, that is probably the largest driver; another is the fact that the NL East and West were the weakest divisions in MLB, so LA's SOS ranks 28th and NYN's ranks dead last. The Mets' average opponent is considered to be about equal to the White Sox; the Yankees' (who played the toughest schedule of any playoff team) is considered to be about equal to the...wait for it...Mets (actually, better than the Mets, 106 to 104).

The wildcard odds:

Very even matchups that actually slightly favor the visiting teams (remember, home team is assumed to win 54.5% if equal).

In the charts that follow, “P” is the probability that the series occurs; P(H win) is the probability that the home team wins should the series occur; and P(H) is the probability that the series occurs and that the home team wins [P*P(H win)].
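The series probabilities themselves are just an enumeration. Here is a sketch for a best-of-five between equal teams, assuming the .545 home edge above and a 2-2-1 home pattern for the higher seed (the format assumption is mine for illustration):

```python
from itertools import product

def series_win_prob(game_probs, need):
    """P(team wins the series) = P(it wins at least `need` of the games),
    since playing out every game doesn't change who reaches `need` first."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(game_probs)):
        p = 1.0
        for won, q in zip(outcome, game_probs):
            p *= q if won else 1 - q
        if sum(outcome) >= need:
            total += p
    return total

# Higher seed at home in games 1, 2, and 5; teams otherwise equal:
probs = [0.545, 0.545, 0.455, 0.455, 0.545]
print(round(series_win_prob(probs, 3), 3))  # about 0.517
```

With no home edge the function returns exactly .500, which is a handy sanity check.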

LDS:

The series I was most interested in without looking at any numbers was NYN/LA; while it still ranks as pretty competitive, it's the least competitive on paper other than TEX/TOR. Surprisingly (but thankfully), KC should have their hands full every step of the way, although obviously wildcards burning off their pitchers is not accounted for here.

LCS:

The home team is favored in every LDS matchup, but that is decisively not the case here, with seven of the twelve possible LCS matchups featuring road favorites. This is due to the NL's top teams all hailing from the Central (the winner of STL/wildcard will be favored in any LCS matchup), along with TEX being considered the AL's weakest playoff participant but holding home field should the wildcard knock off KC.

World Series:

The NL is favored in just four of twenty-five possible matchups, namely those that feature the Rangers against not-the-Mets. STL is considered stronger than KC or NYA by the ratings, but home field advantage tips the odds to the junior circuit.

Putting it all together:

There's a 75% chance the World Series will be sufferable, which I think is decent. The AL has a 56.4% chance to win.

I lost a lot of Twitter followers last year by griping about the playoffs, particularly the Royals. So be it. The 2014 playoffs were the least enjoyable of my time as a fan. One factor was that the series weren't very good--there were a lot of sweeps and the like, until a terrific World Series. It wasn't extraordinarily bad in that respect, but there were fewer games than usual. There were a lot of very good individual games (at least until the World Series, which had an epic game seven that has washed how blah most of it was out of people's memories), but I'd prefer a better balance of series and game drama.

But what really irked me about the 2014 playoffs is how predictable they were. Many people praised the playoffs for their unpredictability, but I contend that in retrospect they were quite predictable--the better team lost. Obviously the very existence of playoffs allows for the regular season results to be voided by short series results; too much so for my liking, as in my ideal world there would be four or two playoff teams (i.e. either the 1969-1993 or pre-1969 format). But the trend of history in every American sport is inexorably toward further expansion of the playoff field.

I am fully aware of both of those facts--that short series often result in the lesser team winning, and that the playoffs are never going to be reduced in size. Accepting that reality does not, however, compel me to enjoy that reality, and I did not enjoy watching the better teams get beaten as a matter of course last October. I also find the style, attitude, and fan/media entitlement of the Royals to be insufferable, and thus I was particularly perturbed by the results.

At the risk of beating that dead horse, further points in my defense of my hatred of the 2014 postseason:

1. The better team almost always losing should not be what people mean when they say they like unpredictability in the playoff outcomes. That would involve the better team sometimes winning. It became very easy to predict (not with any assurance or confidence of accuracy in the moment, of course; I'm not suggesting the universe inevitably set the better teams to be defeated) the 2014 playoffs in short order after the horrid first set of games that saw three of four series go 2-0.

2. The Royals simply don't play the style of baseball I like. That's just my preference, not a sabermetric imperative or anything weighty like that, but I personally do not enjoy watching low secondary average baseball. Of course, the Royals started hitting home runs in the playoffs, so they were winning in large part due to the inverse of the narrative they were used to advance.

3. I don't begrudge anyone rooting for their team. Certainly I root for the Indians unequivocally when they are in the playoffs whether they are the best team or not. But it became really tiresome to hear about the Royals long playoff drought. I certainly do not believe that franchises are owed success thanks to fallow periods, but if I did, I would have started the list of worthy playoff participants elsewhere. After all, KC's drought was simply that they had gone thirty years without making the playoffs at all. They made the playoffs, they advanced from the wildcard round, they were in the playoffs. Why did that drought entitle their fans to more? The Tigers, Orioles, and Pirates all had longer World Series title droughts than the Royals. If people choose who to root for based on past suffering, the line should start there once you've advanced to the playoffs. There are a lot of fans of a lot of teams who have seen their teams make the playoffs multiple times and break their hearts once there, and they're supposed to pull for KC to win in their first shot? And while a thirty year playoff drought is excessive, a little bit of simple logic should tell you that given that there are thirty teams, thirty year world title droughts are not going to be even remotely remarkable in the future.

Hopefully this will be the last time I have occasion to rant about the 2014-15 Royals. May the Astros or Yankees do to them as they did to the Angels.

## Tuesday, September 01, 2015

### End of Season Stats Update

The end of season statistics I post have always walked a fine line between exhibiting a reasonable balance of accuracy and simplicity and veering too far into being needlessly inaccurate. That fundamental tension will not be removed by the changes I am making, but they will clean up a couple of shortcuts that were too glaring for me to ignore any longer. I'm writing these changes up about a month in advance so that I don't have to devote any more space to them in the already bloated post that explains all the stats.

**Runs Created**

I've modified the knockoff version of Paul Johnson's Estimated Runs Produced that I use to calculate runs created to consider intentional walks and hit batters. The idea behind using ERP, which is what I refer to as a "skeleton" form of linear weights, rather than some other construct, is that I don't want to recalculate each and every weight every season. Instead, using fixed relationships between the various offensive events that hold fairly well across average modern major league environments is easy to work with from year to year and avoids the appearance of customization that explicit weights for each event would convey. Mind you, such a formula can still be expressed as x*S + y*D + z*T + ...

The previous version I was using was:

(TB + .8H + W + .7SB - CS - .3AB)*x, where x is around .322

To this formula I need to add IW and HB. This is all fairly straightforward--hit batters can count the same as walks, intentional walks will be half of a standard walk:

TB + .8H + W + HB - .5IW + .7SB - CS - .3AB

The rationale for counting intentional walks as half of a standard walk is that it is fairly close to correct (Tom Ruane's work suggests the ratio of IW value to standard walk value was .57 for 1960-2004). There are other possible approaches, such as removing IW altogether but assigning them the value of the batter's average plate appearance. There is certainly logic behind such a method; just doing it the simple way is a bit more conservative in terms of recognizing differences between batters.
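As a quick arithmetic check of how the -.5IW term works (the .310 multiplier is the updated skeleton multiplier given later in the post; variable names are mine):

```python
x = 0.310                            # updated skeleton multiplier from this post
standard_walk = 1.0 * x              # a full walk is worth about .31 runs
intentional_walk = (1.0 - 0.5) * x   # W - .5IW nets half that, about .155 runs
```

Since W already includes intentional walks, subtracting .5IW leaves each IW worth exactly half a standard walk, a bit below Ruane's empirical .57 ratio.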

Hit batters are actually slightly more valuable on average than walks due to the more random situations in which they occur, but such a distinction would be overkill given the approximate nature of the other coefficients.

I considered making adjustments for the other events included in the standard statistics (strikeouts, sacrifice hits, sacrifice flies, double plays) but ultimately chose to forgo them. The difference between a strikeout and a non-strikeout out is around .01 runs; given that there are numerous shortcuts already being taken, this is simply not enough for me to worry about in this context. Sacrifices and double plays are problematic due to the heavy contextual influences, although I came very close to just counting sacrifice flies the same as any other out. I would include K, SH, and SF if I were trying to do this precisely, but I would still leave double plays alone.

This was also a good opportunity to update the multipliers for all versions of the skeleton, which I did with the 2005-2014 major league totals to get these formulas:

ERP = (TB + .8H + W - .3AB)*.319 (had been using .324)

= .478S + .796D + 1.115T + 1.434HR + .319W - .096(AB - H)

ERP = (TB + .8H + W + .7SB - CS - .3AB)*.314 (had been using .322)

= .471S + .786D + 1.100T + 1.414HR + .314W + .220SB - .314CS - .094(AB - H)

ERP = (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB)*.310

= .464S + .774D + 1.084T + 1.393HR + .310(W - IW + HB) + .155IW + .217SB - .310CS - .093(AB - H)
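A small sketch (Python; function names are mine, not from the post) showing that the new skeleton and its expanded per-event form are the same formula. Here the expanded weights are derived from a flat .310 multiplier, so they differ in the third decimal from the published weights above, which reflect an unrounded multiplier:

```python
# The updated ERP skeleton from the post:
# (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB) * .310
def erp_skeleton(s, d, t, hr, w, iw, hb, sb, cs, ab):
    h = s + d + t + hr
    tb = s + 2 * d + 3 * t + 4 * hr
    return (tb + 0.8 * h + w + hb - 0.5 * iw + 0.7 * sb - cs - 0.3 * ab) * 0.310

# The same estimate as explicit linear weights: a single is worth
# (1 + .8 - .3)*.310 = .465, a double (2 + .8 - .3)*.310 = .775, etc.
# Since W includes IW, an IW nets .310 - .155 = .155 runs.
def erp_expanded(s, d, t, hr, w, iw, hb, sb, cs, ab):
    h = s + d + t + hr
    return (0.465 * s + 0.775 * d + 1.085 * t + 1.395 * hr
            + 0.310 * (w - iw + hb) + 0.155 * iw
            + 0.217 * sb - 0.310 * cs - 0.093 * (ab - h))
```

The -.3AB in the skeleton is charged to every at bat, hits included, which is why the hit weights in the expanded form already carry the -.3 while the out term applies only to AB - H.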

The expanded versions illustrate one of the weaknesses of the skeleton approach, or perhaps more precisely using total bases and hits rather than splitting out the hit types, as it results in the relationship between hit types being a bit off, particularly in the case of the triple. Still, I find the accuracy tradeoff acceptable for the purposes for which I use the end of season statistics.

For the batters who appeared in the 2014 end of season statistics, the biggest change switching to the version including HB and IW was five runs. Jon Jay, Carlos Gomez, and Mike Zunino gain five runs while Victor Martinez loses five runs. Of the 312 hitters, 263 (84%) change by no more than a run in either direction. So the differences are usually not material, another reason why I personally didn't mind the inaccuracy. But Carlos Gomez might disagree.

Along with the RC change are some necessary changes to other statistics. PA is now defined as AB + W + HB. OBA is (H + W + HB)/(AB + W + HB), and Secondary Average is (TB - H + W + HB)/AB, which is equal to SLG - BA + (OBA - BA)/(1 - OBA).
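The secondary average identity can be verified numerically; a quick sketch with a made-up stat line (function name is mine):

```python
# Rate stats under the updated definitions, with PA = AB + W + HB.
def rate_stats(h, tb, w, hb, ab):
    ba = h / ab
    slg = tb / ab
    oba = (h + w + hb) / (ab + w + hb)
    sec = (tb - h + w + hb) / ab
    return ba, slg, oba, sec

ba, slg, oba, sec = rate_stats(h=150, tb=250, w=60, hb=8, ab=550)
# The identity from the text: SEC = SLG - BA + (OBA - BA)/(1 - OBA)
sec_identity = slg - ba + (oba - ba) / (1 - oba)
```

The identity holds because (OBA - BA)/(1 - OBA) simplifies algebraically to (W + HB)/AB, and SLG - BA is (TB - H)/AB.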

**RAA for Pitchers**

For as long as I've been running these reports, I've used the league run average as the baseline for Runs Above Average for both starters and relievers. This despite using very different replacement levels (128% of league average for starters and 111% for relievers). I've rationalized this somewhere, I'm sure, but the fundamental flaw is apparent when you look at my reliever reports and see three or four run gaps between RAA and RAR for many pitchers.

I want to avoid using the actual league average split for any given season, since it can bounce around and I'd rather use the league overall average in some manner. So my approach instead will be to look at the starter/reliever split in eRA (Run Average estimated based on component statistics, including actual hits allowed, so akin to Component ERA rather than FIP) for the last five league-seasons and see what makes sense.

The resulting difference in baseline between starters and relievers will not be as large as that exhibited in the replacement levels. The replacement level split attempts to estimate the true talent difference between the two roles, recognizing that most relievers would not be anywhere near as effective in a starting role. This adjustment is simply trying to compare an individual pitcher to what the composite league average pitcher would be estimated to have in his role (SP/RP) and does not account for our belief that the average starter is a better pitcher than the average reliever.

Additionally, using eRA rather than actual RA makes the adjustment more conservative than it otherwise might be, because it considers component performance rather than actual runs allowed. Part of the reliever advantage in RA is that the scoring rules benefit them. Why did I not take this into account? I actually don't use RA in calculating pitcher RAA or RAR; I use a version of Relief RA, which was created by Sky Andrecheck and makes an adjustment for inherited runners (a simple one that doesn't consider base/out state, simply the percentage that score). The version I use considers bequeathed runners as well, so as to adjust starters' run averages for bullpen support. But the statistics on inherited and bequeathed runners by role for the league are not readily available, so I based the adjustment on eRA, which I already have calculated for each league-season broken out for starters and relievers.

This chart should be fairly self-explanatory: seRA is starter eRA, reRA is reliever eRA, eRA is the league eRA, s ratio = seRA/Lg(eRA), r ratio = reRA/Lg(eRA), and S IP% is the percentage of league innings thrown by starters. The relationships are fairly stable for the last five years, and so I have just used the simple average of the league-season s and r ratios to figure the adjustments.

RAA (for SP) = (LgRA*1.025 - RRA)*IP/9

RAA (for RP) = (LgRA*.951 - RRA)*IP/9

You can check my math as the weighted average of the adjustment is 1.025(.665) + .951(1 - .665) = 1.0002.
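The two baselines can be sketched directly from the formulas above (Python; function names are mine, and any RRA inputs would come from the Relief RA calculation described earlier):

```python
# Starters are compared to 102.5% of the league run average,
# relievers to 95.1%, per the adjustments derived in the post.
def raa_sp(lg_ra, rra, ip):
    return (lg_ra * 1.025 - rra) * ip / 9

def raa_rp(lg_ra, rra, ip):
    return (lg_ra * 0.951 - rra) * ip / 9

# Weighted by the starter innings share (.665), the two adjustments
# average back to the overall league run average.
weighted = 1.025 * 0.665 + 0.951 * (1 - 0.665)
```

A pitcher whose Relief RA exactly matches his role's adjusted baseline comes out at zero RAA, which is the point of the split.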