Sunday, August 29, 2010

Obligatory Triple Crown Post

With Pujols and Votto in pursuit, the triple crown is all the rage these days. Never missing a good opportunity to write a brainless post, here I am to weigh in on a topic on which more than enough has already been said.

The concern about Omar Infante leading the league in BA (I'm sorry, winning the "batting title") is amusing on a number of levels. It would be moot if the triple crown got with the times and was composed of OBA/HR/RBI. Of course, there are a number of different ways in which one could construct a "better" triple crown, but replacing BA with OBA might be one people would be receptive to. However, the triple crown as it has been historically defined is not going to be wiped from existence in any event.

The triple crown as currently constructed has been captured fourteen times by twelve different players (O'Neill, Lajoie, Cobb, Hornsby twice, Foxx, Klein, Gehrig, Medwick, Williams twice, Mantle, Robinson, and Yaz). This is what the honor roll of OBA/HR/RBI would look like:

Tip O'Neill: 1887 AA
Nap Lajoie: 1901 A
Ty Cobb: 1909 A
Gavvy Cravath: 1915 N
Babe Ruth: 1919 A, 1920 A, 1921 A, 1923 A, 1926 A
Rogers Hornsby: 1922 N, 1925 N
Chuck Klein: 1933 N
Lou Gehrig: 1934 A
Ted Williams: 1942 A, 1947 A, 1949 A
Frank Robinson: 1966 A
Carl Yastrzemski: 1967 A
Willie McCovey: 1969 N
Harmon Killebrew: 1969 A
Dick Allen: 1972 A
Mike Schmidt: 1981 N
Barry Bonds: 1993 N

OBA in place of BA would increase the frequency of triple crowns (23 rather than 14), but it would only increase the number of players who have done it from 12 to 16, since much of the change in frequency is due to Ruth. Ruth would gain five triple crowns, while Cravath, Williams, McCovey, Killebrew, Allen, Schmidt, and Bonds would each gain one. Jimmie Foxx, Joe Medwick, and Mickey Mantle would each lose theirs.

I've seen a number of people attempt to develop a "sabermetric" triple crown, and I offered a few of my own ideas on a couple of other sites. My most basic position on the matter is that I don't really care for the concept, because it elevates leadership in particular categories above the contribution suggested by the statistics themselves. By its very nature, any sort of triple crown construct is going to be rooted in trivia rather than value. Also, there is no way you are ever going to be able to replicate the attention that has been paid to the BA/HR/RBI crown throughout the years. That being said, there are a few basic principles I'd propose for designing a triple crown:

1. The triple crown is based on batting only, with no attention paid to baserunning or fielding. There wouldn't necessarily be anything wrong with considering those elements of the game, and it might make it more meaningful, but it wouldn't be true to the current construct.

2. The triple crown is one rate stat and two total stats, which places value on being in the lineup. It would make sense for a replacement to give similar consideration (rather than something rate-exclusive like BA/OBA/SLG).

3. The categories should be of roughly equal value. BA, HR, and RBI are held in similar esteem in traditional analysis (whether they should be or not). I've seen some people make proposals for a sabermetric triple crown that would use a relatively trivial metric (like baserunning runs) as one of the components. One could do that, but it wouldn't carry comparable cachet. BA/OBA/SLG would undeservedly put BA on equal footing with OBA and SLG.

4. There shouldn't be excessive overlap between the categories. The standard triple crown has a metric which ostensibly measures "production" (RBI), one which measures power (HR), and one which ostensibly measures the rate of overall hitting (BA). While there is overlap, particularly between HR and RBI, there is at least some independence. A BA/OBA/SLG crown would ignore that BA is a major component of OBA and SLG. Something like BA/ISO/walk rate would break the components out.

Keeping with the two totals/one rate construct, and limiting the metric choices to just those that are easily explained and/or already in use, I would offer Times on Base, Total Bases (or extra bases), and OPS--totals for on base and power, and an overall rate.
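
To make the proposal concrete, here's a minimal sketch in Python of how a TOB/TB/OPS crown could be checked. The player names and stat lines below are entirely made up, purely to illustrate the arithmetic:

def times_on_base(s):
    return s["H"] + s["W"] + s["HB"]

def total_bases(s):
    return s["H"] + s["2B"] + 2 * s["3B"] + 3 * s["HR"]

def ops(s):
    # OBA (TOB per PA) plus SLG (TB per AB)
    pa = s["AB"] + s["W"] + s["HB"] + s["SF"]
    return times_on_base(s) / pa + total_bases(s) / s["AB"]

# hypothetical batting lines
players = {
    "Player A": {"AB": 550, "H": 180, "2B": 35, "3B": 2, "HR": 40, "W": 90, "HB": 5, "SF": 5},
    "Player B": {"AB": 600, "H": 200, "2B": 45, "3B": 5, "HR": 25, "W": 40, "HB": 3, "SF": 7},
}

leaders = {m: max(players, key=lambda p: f(players[p]))
           for m, f in (("TOB", times_on_base), ("TB", total_bases), ("OPS", ops))}
print(leaders, "- triple crown!" if len(set(leaders.values())) == 1 else "- no triple crown")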

Even my own proposal illustrates my biggest objection to any triple crown construct--much as with records based on hits, one component or another is going to favor players who don't draw a lot of walks. Even in 2001, when he hit 73 homers, Barry Bonds did not lead the league in total bases; Sosa and Luis Gonzalez each finished ahead of him. One could get around this by adding walks into TB to make "complete bases"--but that statistic, used in conjunction with Times on Base, would overvalue walks.

It's very tough to construct a triple crown that isn't entirely rate-based and that would still recognize Barry Bonds' 2001-04 seasons, which constituted the best run of sustained offensive domination since Ruth. If those seasons don't earn a batter a triple crown of some sort, it only reinforces that the triple crown is about trivia, not value.

Wednesday, August 25, 2010

Make Waivers Fun

It seems to me that the waiver trading period has become a lot less interesting, particularly this season. It *seems* as if every time a player is placed on waivers, we hear about it. Rather than there being any mystery about which players have been blocked and which are on the open market, the primary drama seems to occur when we find out a player has been claimed, and then we embark on a Johnny Damon watch. This offers myriad opportunities for the player to be bashed for being a loser (another of the many contradictions contained in the oft-expressed desire for player loyalty) and, from my perspective at least, is not interesting or fun.

Perhaps my memory of what the waiver period used to be like is fuzzy; perhaps it is simply that the existence of blogs and Twitter makes it much easier to disseminate information on waiver claims within the 48-hour window. In any event, whether MLB could have a more interesting and more efficient way to award waiver claims is much more important than how the waiver period makes me feel. I'm going to assume for the sake of discussion that trade waivers and the waiver deadline are necessary and should not be eliminated (in other words, extending the non-waiver deadline to August 31 isn't an option).

The fundamental issue I have with the system as it stands is that a team's claim priority is determined in reverse order of winning percentage. This makes sense when dealing with general waivers throughout the season--it gives the lesser teams the first crack at talent that becomes available. I contend that it does not make sense when dealing with trade waivers. The vast majority of claims during this period are made by contending teams looking to upgrade their roster. Why should the White Sox get the exclusive right to negotiate with the Dodgers for Manny Ramirez just because they have a worse record than the Rays?

This is not a competitive balance issue--the (talent-)rich are going to be getting richer in any event, because it would make no sense for the Orioles or the Pirates to claim Manny. Instead of promoting competitive balance, giving waiver priority to teams with lower W% punishes success.

There are countless different mechanisms by which waiver claims could be prioritized, without going by W%. I'm going to offer several, of which at least two are inspired by fantasy baseball. MLB could stand to learn a thing or two from the reality-hating losers:

1. Increase the price of making a claim

There is a price for an awarded claim, but it's pretty much negligible for an MLB team. That price could be raised, or it could be assessed for any claim, regardless of whether it was fulfilled.

I doubt this would be a very effective remedy, because the danger of being stuck with the remainder of a player's contract is always going to be a greater financial burden than paying the waiver fee. It also does nothing to address the White Sox/Rays issue, in which two teams would legitimately want the same player.

2. Have a claim awarded, go to the back of the line

This is how waivers work in standard Yahoo! fantasy leagues. The initial waiver priority list is established by some algorithm, and from that point on, any team that is awarded a claim drops to the back of the priority list. This makes me think very carefully about whether or not to claim a player I want on waivers, or try to wait until the waiver period is over and he is a free agent. The White Sox supposedly claimed Trevor Hoffman today, but they may not have done so if it would have prevented them from getting Ramirez at a later date.

3. A lottery

Add some randomization into the mix. Give each claiming team an equal chance, or come up with some weighting of W% or place in the standings, etc. Don't allow a team to get the claim just because it has the worse record.

This is one of the weaker proposals I've offered. Unlike the NBA Draft Lottery, which serves to reduce the incentive to lose on purpose, there is no moral hazard involved in trade waivers. The White Sox have no incentive to lose games and hurt their standing in the pennant race so that they can have the first shot at Manny to make up the ground they chose to give away. Also, conspiracy theorists would have a field day, especially after the first time the Yankees were awarded a claim.

4. Have an auction with fake money

This is the FAAB model from fantasy baseball. Give each team a budget of $100 to use throughout the waiver period. You really want Manny? Bid your full $100, but you're out of luck when other desirables hit the wire. This combines the strategy of the "go to the back of the line" approach with equal access to players throughout the process.

5. Have an auction with real money

This one would never fly, because it would come close to being a player sale, which Mr. Kuhn in his infinite wisdom decided was not in the best interests of baseball. Let the White Sox and the Rays bid on the right to claim Manny. If the White Sox have the high bid, they can attempt to work out a trade which overrides the fee. Or the second-highest bid could be the amount that they would have to pay the Dodgers if Manny wasn't pulled back. Of course, there are any number of possible permutations for any of these suggestions. For the purposes of waiver claims, both auction models would probably work better as silent auctions.

I'm not saying that the ideas sketched out here are perfect; I'm sure that creative people could come up with twists that would make them better. I do think that, with the exception of the lottery and the increased claim price, any of these ideas implemented intelligently would be a fairer way to handle waivers and would add a lot of intrigue to the process (even if it was only behind the scenes).

Sunday, August 22, 2010

Rudimentary Team Fielding Metrics

If you divide baseball into offense, pitching, and fielding, there's no question that fielding is the one I spend the least amount of time on as a sabermetrician. Just look at the labels on the side of the page; as of this writing I have 61 posts labeled "Offense", 24 labeled "Pitching", and just 3 labeled "Fielding". This even understates it a little, since none of the fielding posts include any new ideas put forth by me, and because a lot of what is classified as "Offense", like run estimators, is equally applicable to the defense, albeit only as a whole.

It's not that I don't think fielding is important to winning ballgames. It's not that I think sabermetrics has got fielding all figured out. It's just not a topic that I have ever had much to contribute towards.

One of the major reasons for this is the same reason that I don't do any work with Pitch F/x data--I don't really understand it well enough to come up with anything useful. I love algebra and probability and statistics and most calculus; I hate geometry and trigonometry and those calculus problems in which you try to figure out the volume of a cylinder rotated around the y-axis. Have a problem which requires use of the quadratic equation, the binomial distribution, or partial differentiation? Sign me up. But the minute you start tossing around polar coordinates or angles, I'm just as math-averse as the average old-school sportswriter.

This limitation can be quite an inhibition when it comes to being on the cutting-edge of fielding or pitch analysis, so I stick with the topics in which one can safely avoid any angles or sines. I was reading an article the other day in which "3-space" was mentioned, and it was the first time in reading a sabermetric piece that I could relate to the guy who says "I like baseball, not math."

None of that is intended to in any way suggest that the research being done in those areas is less important than the things I write about. It's more of an apology for the rudimentary nature of the team fielding metrics that follow.

The impetus for this post is that I wanted to add a couple of team fielding metrics to my end-of-season stats, just to make it clear that I realize fielding is part of the game. The philosophy with those stats has always been to stick to either official categories or things that are easy enough to find otherwise (like doubles and triples allowed, or inherited runners). So any of the advanced fielding metrics are already disqualified from inclusion, and even if they weren't it would be pointless because I would just be copying someone else's work so to speak.

So, limiting the scope of categories available just to the official and semi-official categories, what can one do about team fielding? Obviously there's Defensive Efficiency Record, which is very important even in the PBP fielding age. There's team Fielding Average, which is not particularly useful but is still widely cited in the mainstream. You could do something with double plays, passed balls, or stolen base percentage and after that the pickings are fairly slim. I've passed on double plays because they are highly correlated with the groundball tendencies of the pitching staff, and to look at them without that context would be misleading at best.

As a result, I'm including just three categories: DER, a modified fielding average, and a rate of wild pitches and passed balls:

(1) Battery Mishap Rate (BMR)

It is hardly a novel idea to combine wild pitches and passed balls; while I was working on this post, by chance I stumbled upon Bill James describing the distinction between WP and PB as the "silliest distinction in the records" in 1988. I agree with him, so BMR is simply the ratio of WP and PB to baserunners, multiplied by 100:

BMR = (WP + PB)/(H + W - HR)*100

A battery mishap can occur without a baserunner (a mishandled third strike that allows the batter to reach), but baserunners make more sense as the opportunity factor than anything else. The highest team BMR of the last twenty years (1990-2009) was 6.0, by the 1993 Marlins (a wonderful combination of a knuckleballer with an expansion team); next is the 1990 Yankees (5.8, without any such easy excuse). The lowest rate was 1.5 by the '92 Padres, and the average was 3.4, both overall and in 2009. In 2009, the lowest BMR was 2.0 (BAL) and the highest was 5.4 (KC).
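
In code, the calculation is trivial; here's a minimal sketch with hypothetical team totals (the numbers are made up just to show the arithmetic, not taken from any actual team):

def battery_mishap_rate(wp, pb, h, w, hr):
    # (WP + PB) per 100 baserunners, with baserunners estimated as H + W - HR
    return (wp + pb) / (h + w - hr) * 100

# e.g. 45 WP and 12 PB against 1450 hits, 550 walks, 160 HR allowed
print(round(battery_mishap_rate(45, 12, 1450, 550, 160), 1))  # 3.1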

(2) Modified Fielding Average (mFA)

Fielding Average has many issues, foremost of which is that it is built on the silly distinction between a hit and an error. Still, it's not going anywhere and it won't hurt anything to list it on a spreadsheet.

A really easy alteration to traditional FA is to remove strikeouts, since they are generally easy putouts with little opportunity for errors. In fact, the most common mishap on a strikeout is a wild pitch or a passed ball, and thus not scored as an error at all. So we can define kFA (strikeout-adjusted FA) for a team as (PO + A - K)/(PO + A - K + E).

I think there's another modification that's simple but justified, and I wouldn't be surprised if someone has already proposed it, although I couldn't find anything in a quick search. Consider this theoretical inning:

1. 6-3
2. E5
3. 6-4 fielder's choice
4. 5-4 fielder's choice

Three putouts, one error, three assists = .857 FA

And this one:

1. fly to 8
2. fly to 9
3. E4
4. 6 unassisted fielder's choice

Three putouts, one error, no assists = .750 FA

Team A is credited with a better FA (ostensibly a lower error rate) than Team B, but does this really make sense? Each team recorded three outs and made one error. In the first case, plays were completed by assists, while in the second all plays were made unassisted.

It's possible to make the case that plays involving assists take more skill, generally, than those that don't. But even someone taking that position would have to admit that there are many cases in which there is no meaningful distinction (such as the fielder's choices with and without assists). In some cases, like a first baseman with bad knees flipping to the pitcher, or a rundown involving more players than necessary, the assist is actually indicative of a poorer fielding outfit.

The practice of including assists in fielding average appears to me to be a reflexive application of the same formula that one would use for individuals. There's no reason why the same formula must be used for teams as well. Counting the number of players who handle the ball obscures the point: the goal of a team in the field with respect to errors (to the extent avoiding errors is a goal at all) is to make as few errors as possible while recording outs, not to rack up chances.

So I offer a modified FA for teams:

mFA = (PO - K)/(PO - K + E)
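
Here's a minimal sketch comparing traditional FA, kFA, and mFA, with hypothetical team totals chosen only to illustrate the formulas:

def fielding_averages(po, a, e, k):
    fa = (po + a) / (po + a + e)           # traditional fielding average
    kfa = (po + a - k) / (po + a - k + e)  # strikeouts removed
    mfa = (po - k) / (po - k + e)          # strikeouts and assists removed
    return fa, kfa, mfa

# e.g. 4350 PO, 1700 A, 100 E, 1150 K
print([round(x, 3) for x in fielding_averages(4350, 1700, 100, 1150)])  # [0.984, 0.98, 0.97]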

One thing I should note is that there's a decent case to be made for looking at the complement of FA--making errors the numerator rather than putouts. Since all the numbers are clustered in a small range in the upper .900s, they might read better on paper as a small range of error rates below .050.

For most teams, using mFA makes very little difference. For 1990-2009, the correlation between kFA and mFA is +.994. Over that period, the average team has a ratio of .51 assists per (PO - K), ranging from .43 ('02 MIN) to .59 ('03 LA). mFA correlates better with DER, but not significantly so. The teams with high ratios would generally have been teams that got more groundballs, and as a driving factor for why kFA and mFA diverge, there is a messy and intertangled relationship between mFA, DER, and overall team defense (including pitching).

Even if one believes that plays involving assists should be given extra weight, do you really think the appropriate weight on assists is one, double-weighting those plays in establishing the opportunity factor for errors? mFA weights them at zero; perhaps it would make more sense to use, say, .3, but one seems excessive in any event.

One might ask why BIP is not the denominator. The drawback to using BIP is that a team's error rate would be reduced by allowing a hit. It makes more sense to combine errors and hits as failures and compare them to BIP--which is exactly what DER does.

The average mFA over the period and in 2009 was .967. The highest mFA was .981 by Seattle in 2003; they ranked third in standard FA. The top nine teams in FA rank are also the top nine in mFA, although in different order. The lowest mFA was .951 by the 1992 Dodgers--that's what happens when Jose Offerman plays short and makes 42 errors. That Dodgers team was second-to-last in traditional FA; the opposite combination is true for the 2009 Nationals.

The team whose ranking improves the most by using mFA is the 1992 Tigers (.981 FA, .969 mFA). They recorded just 693 strikeouts, the fewest of any team in the period in a non-strike season. The team with the biggest drop in ranking is the 2003 Cubs (.983 FA, .965 mFA), and they struck out more batters than any team in the period. Strikeouts pad the putout total and obscure the true error rates of fielders when they are included in fielding average.

Here are the 2009 team figures, with ML rank in FA and mFA, sorted by difference in ranks. Positive differences indicate teams that rank higher in mFA than in FA:



(3) Defensive Efficiency Record (DER)

Given that Bill James' DER is the most-widely used measure of team fielding, you'd think his original formula would be easy to find online, or in one of the STATS or Baseball Info Solutions publications James contributed to. You'd be wrong. I'm sure it's out there somewhere, but it's not easy to find, so I saved time by rummaging through the closet to dig out one of my Abstract copies.

DER is the percentage of balls in play that are converted into outs, and James used two estimates to establish the numerator of plays made. One uses putouts as its starting point; the other begins with plate appearances. The second is now used by most analysts, as the data is more accessible and it is also for all intents and purposes the complement of BABIP, the metric whose behavior is at the heart of DIPS theory.

I'll use that second estimate exclusively as well, but for completeness, the first formula is:

PM1 = PO - K - DP - 2TP - CS - ofA

This estimate assumes that a putout occurs on a batted ball unless it's a strikeout, or multiple outs are recorded on the same play (DP, TP), or it is a baserunning out (CS, ofA).

PM2 = PA - K - H - W - HB - .71E

The second estimate assumes that every batter is out on a ball in play unless he reaches safely (H, W, HB) or on an error (ROE is estimated to be 71% of total errors), or he strikes out.

PM is then figured as the average of PM1 and PM2, and DER follows:

DER = PM/(PM + H - HR + .71E)

I have gone with the PA form, as many others have--it takes a lot more effort to run down team outfield assists, and the two estimates are always very close.
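
For reference, here's a minimal sketch of both estimates as described above, using a made-up team season to show the arithmetic:

def pm1(po, k, dp, tp, cs, of_a):
    # putout-based estimate of plays made
    return po - k - dp - 2 * tp - cs - of_a

def pm2(pa, k, h, w, hb, e):
    # plate-appearance-based estimate of plays made (ROE estimated as 71% of errors)
    return pa - k - h - w - hb - .71 * e

def der(pm, h, hr, e):
    return pm / (pm + h - hr + .71 * e)

# hypothetical team totals, using the PA form only
plays = pm2(pa=6200, k=1100, h=1450, w=550, hb=50, e=100)
print(round(der(plays, h=1450, hr=170, e=100), 3))  # about 0.688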

For 1990-2009, here are the correlations between each of these metrics, plus Run Average, Unearned Run Average, and W%. I'm not offering this table up as being analytically important, and some of the correlations are silly--BMR with DER, for instance. The computer spits them all out, though, so I might as well list them:



As always, you have to be careful when interpreting correlations of this sort, and not putting too much stock in them. mFA and kFA have weaker correlations with W% and RA than FA, but that is not unexpected and tells us next to nothing about their performance as measures of fielding. Removing strikeouts isolates fielding results, but removes valuable information about how good the team was at defense overall (defense defined as pitching + fielding).

Friday, August 20, 2010

Monday, August 16, 2010

Equivalent Winning Streaks

Disclaimer: What follows is most certainly a freak show stat. It's a freak show stat that deals with something that isn't very important to begin with.

The longest winning streak in the majors in 2009 was eleven games; both Boston and Colorado put together eleven-game skeins. Washington won eight consecutive games at one point. Which streak was the most impressive?

The obvious response is "What, are you nuts? Of course an eleven-game winning streak is more impressive than an eight-game winning streak. Why would you even ask such a question?" Truth be told, this is the response I would probably give if somebody asked me this question.

So let's suppose that instead I asked you "Which streak was more likely?" Now that's a question that a sabermetrician or anyone else with an appreciation for probability can embrace. With some simple assumptions (independence of games and a constant team W% regardless of opponent or other factors), it's a very simple computation. Just take the team's W% and raise it to the nth power, where n is the number of games in the streak.

I'm going to assume that each team's 2009 W% is equal to their true, constant W%. Obviously I don't actually believe this, but in this case I'm trying to identify unusual streaks given the way the team actually played. If we believe a team is a true-talent .525 team, but they actually played .575 baseball, then we are going to expect some longer winning streaks in retrospect. If we wanted to know the probability of a five-game winning streak in the next five games, then it would definitely be inappropriate to use .575 instead of .525.

That is not quite a satisfactory defense of treating the 2009 W% as the true W%, because if we treat all results as known, then we run into what you might recognize from probability class as an urn problem--for some reason they love to use colored balls in urns as examples in stats texts. If you start by taking a team's record (say 81-81) as known, then if you "draw" one game from that sample, there's an 81/162 chance it was a win. But if you draw a second game, the win probability drops to 80/161. Treating the games as independent is equivalent to replacing the balls that you draw; it's easier, and it's justifiable if you assume that the team's actual record tells us its true quality rather than treating the individual game outcomes as fixed.

Back to the Red Sox and the Nationals. Boston had an eleven-game winning streak, and they were a .586 team, and thus the probability that they would win eleven in a row (over any particular eleven game segment, not for the season as a whole) was .586^11 = .28%. Washington won eight straight, but with a W% of just .364, there was only a .03% chance of an eight game win streak. Given their overall record, Washington's streak was the most unlikely in the majors in 2009.

If you are asking "Who cares?", you are not alone. This is quite admittedly a freak show stat.

There is a more elegant way to express those probabilities, on a scale which you probably find a lot easier to wrap your mind around than hundredths of a percent. We can express each streak as the length of streak by an average (.500) team that would be equally likely. Just ask the question "How many games would a .500 team have to win in a row for there to be a .03% chance of that streak occurring?" If you accept fractional games, then this is a pretty easy math problem:

.5^x = prob(streak)
where prob(streak) is for the observed streak, and is equal to W%^y, where y is the length of the streak (so for the Red Sox, W% = .586, y = 11, and prob(streak) = .586^11 = .0028)
so
log(prob)/log(.5) = x

x is the equivalent winning streak (i.e. equally likely) for a .500 team. For the Red Sox, this comes out to 8.5 games; for the Nationals, 11.7. Boston's winning streak, despite being tied for the longest in MLB by games, ranks only fifth by equivalent winning streak.
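
The computation in code form, using the 2009 figures cited above (a minimal sketch):

import math

def equivalent_streak(w_pct, streak_length):
    # length of a .500 team's streak that would be equally likely
    prob = w_pct ** streak_length
    return math.log(prob) / math.log(.5)

print(round(equivalent_streak(.586, 11), 1))  # Red Sox: 8.5
print(round(equivalent_streak(.364, 8), 1))   # Nationals: 11.7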

The shortest maximum win streak in 2009 was four games by Houston, but the Cardinals had the shortest equivalent winning streak, just 4.2 (a five game streak in reality). The Yankees' longest winning streak was nine games, which given their .636 W% ranks seventh-to-last in equivalency (5.9 games).

One can of course also turn this around for losing streaks by simply using the complement of winning percentage (losing percentage, if you will, L/(W + L)) in place of W% in the formulas. The losing percentage of a .500 team is, of course, .500, so the constant does not change.

The longest losing streak in MLB was thirteen games by the Orioles, but the Rays' streak of eleven was the longest in equivalency (besting BAL 11.6 to 9.4). The shortest maximum losing streaks were four games, by the Brewers and Angels; given the Brewers' .494 W%, they have the shortest equivalent streak (3.9 games).

I've attached a table with the actual max win and loss streaks and the corresponding .500 equivalents for each team in 2009. Again, this has absolutely no analytical value and I'm not claiming that it does.



We could generalize this a little more by playing fast and loose with the normal approximation to the binomial distribution. Suppose a .550 team goes 6-14 over a twenty-game span. What is the equivalent performance by a .500 team (the number of games they would win which is equivalent to a .550 team winning 6 or fewer)? I am not a stats professor, so use the approximation with care. There are some rules of thumb out there for when it is acceptable to use the approximation (n >= 20, np and n(1 - p) must both be >= 10, and others), and I can't guarantee you that what I'm doing here is kosher. But it's a freak show stat, remember, so I'll pretend from this point as if there is no issue at all.

Let n be the total number of games in question (here it is 20) and let p be the probability of winning a single game (.550). Then:

mean = np = 20*.55 = 11
std = sqrt(np(1 - p)) = sqrt(20*.55*(1 - .55)) = 2.225
z-score = (Wins - mean - .5)/std = (6 - 11 - .5)/2.225 = -2.47

The minus .5 is the continuity correction, since we are converting between a discrete (binomial) and continuous (normal) distribution. Basically, we consider x wins to occur on the range from x - .5 to x + .5.

Now we need to convert the z-score to equivalent wins for the .500 team:

z-score = (EqWins - mean' - .5)/std'

mean' = .5n = .5*20 = 10
std' = sqrt(n*.5*(1 - .5)) = sqrt(n*.25) = .5*sqrt(n) = .5*sqrt(20) = 2.236

so

z-score*std' + .5 + mean' = EqWins
-2.47*2.236 + .5 + 10 = 4.98

So the .550 team going 6-14 over a 20 game stretch is *roughly* equivalent to a .500 team going 5-15.
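
Here's a minimal sketch of that conversion; it follows the same steps as above (including the -.5 adjustment), and the small difference from the 4.98 in the text comes from rounding the intermediate values there:

import math

def equivalent_wins(wins, n, p):
    mean, std = n * p, math.sqrt(n * p * (1 - p))
    z = (wins - mean - .5) / std                 # z-score for the observed record
    mean500, std500 = .5 * n, .5 * math.sqrt(n)  # .500 team over the same n games
    return z * std500 + .5 + mean500

print(round(equivalent_wins(wins=6, n=20, p=.55), 2))  # about 4.97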

Thursday, August 12, 2010

Great Moments on the Yahoo! Scoreboard


What Jose Bautista fluke season?

Wednesday, August 11, 2010

Great Moments in Yahoo! Statistics


Column align fail.

Tuesday, August 10, 2010

Great Moments in Yahoo! Box Scores



This error seems to happen when the player involved does not yet have his own player page. Mateo was just called up, and so he doesn't have one yet. At least I think that's what is going on.

Maybe I shouldn't try to explain these, though--it's more fun to just let the magic happen.

Tuesday, August 03, 2010

Meanderings

* When the Indians played at the Phillies in June, multiple outlets reported that this was the Indians' first-ever regular season game in Philadelphia. It seems that the source of this was the Indians' own media notes on the game, and many of the outlets simply passed along this information. That's fair enough--one should have a reasonable degree of confidence in the information put out by the team, and not have to fact-check everything.

On the other hand, I would hope that someone would recognize that the claim that the Indians had never before played in Philadelphia is absurd on its face. The Indians shared the American League with the Philadelphia A's for fifty-three seasons.

Of course, the note could have been easily corrected by changing it from "in Philadelphia" to "at the Phillies". My incredulity, mild as it is, comes not from the incorrect tidbit of trivia, but from the ignorance of history on the part of people who report on baseball for a living.

Granted, it's not particularly relevant to reporting on modern baseball to know about a team that hasn't played in that city in nearly sixty years, and it could just be a simple oversight, which we are all prone to at one time or another (quick: find a factual error in this post!) On the other hand, it has always seemed to me that people who make their living writing about or commenting about or compiling data about baseball would be big enough baseball fans to be aware of a franchise that won five World Series and featured luminaries like Connie Mack, Lefty Grove, Eddie Collins, Home Run Baker, ...

This phenomenon is not limited to the specific example of the A's--it's something that gnaws at me occasionally when I hear people talk. The Indians' play-by-play man, Tom Hamilton, is sometimes revealed to be somewhat ignorant of National League rookies or Korean and Taiwanese baseball, for instance. The best defense of professionals might be that they do this for a living, and so rather than being a fun diversion, it's a job. It still seems a little odd to me.

* I could be way off base on this next observation--it certainly wouldn't be the first time that I made a faulty generalization. (Also, I don't want to get bogged down in the identities of the players discussed. If you think I've mischaracterized the career of George Brett, then feel free to substitute someone else in his place). However, it seems to me as if the average fan, when evaluating great players, is not drawn to the extreme poles of peak or career but rather to extreme performances on either pole. So it's not that he picks Sandy Koufax as one of his top pitchers while his career-preferring double picks Nolan Ryan--he picks both Koufax and Ryan, because he's impressed by the extreme peak and the extreme career.

The result of this is a bewildering middle ground, in which Pete Rose and Sandy Koufax are simultaneously voted onto the All-Century team (I am not saying that the silly All-Century vote confirms my theory; that vote would be completely consistent with people belonging to extreme career or extreme peak camps, and simply balancing each other out in the public at large. It also did not boast a particularly well-designed voting system or an informed electorate. I may be using it as evidence, but am admitting that it could easily be used by someone arguing against me as well, or dismissed as meaningless). If you are an extreme career voter, then there's no way in hell you can believe that Sandy Koufax was one of the ten best pitchers of all-time. And if you're in the extreme peak camp, it makes no sense to believe that Pete Rose is the sixth-best outfielder of all-time.

However, if what is going on is that fans are impressed by one extreme or the other, then picking Koufax and Rose can be explained. I still don't think it makes a lot of sense, because taking a dual-extremes position excludes the players who were really good in their peaks and really good in their careers--which is the bulk of the great players in history.

The other complicating factor is that the average fan when evaluating a career probably looks at bulk totals rather than baselined value. When I talk about career value, I almost always mean career value above replacement. Just staying around and compiling only counts to the extent that you can exceed replacement level, and so the last few years of Pete Rose's career have no impact on my evaluation of him. But for those who are looking at career through the lens of totals, Rose's last years often are a strong positive (4,000 hits!)

The very greatest, of course, had tremendous peaks and tremendous career totals--Cobb, Ruth, Bonds, etc. There are some great players that had very good peaks but extraordinary career totals--Hank Aaron, for instance. There are some great players that had great peaks but only good career totals--say Pedro Martinez. And then there is the bulk of great players--guys like Mel Ott or George Brett. If you draw up your list by either extreme, these guys are not going to be at the very top. But if you evaluate by some combination of peak and career, these guys will rank comfortably ahead of one-trick ponies like Koufax.

* The five no-hitters thrown this season continue to be one of the driving factors behind the "Year of the Pitcher" storyline that the media has run with. But what is the probability of observing five or more no-hitters in a season?

I was going to write about applying the Poisson distribution to this question, but happily discovered that there have been multiple pieces that already covered that ground (see Bob Brown's article "No-Hitter Lollapaloosas Revisited" in the 1996 Baseball Research Journal, this post at Bayes Ball (great blog name, BTW), this one at Tom Flesher's blog, and this paper by some folks from Middlebury College's Econ department).

Since these folks have already done the legwork, I'm not going to offer a justification for this approach (they are much better qualified to do so in any event, so I'll refer you there). Based on the data in Flesher's post, the observed probability of a no-hitter from 1961-2009 is 120/201506 = .0006.

So far this season, there have been 3,166 games played in the majors (through 8/2, counting each game twice since both teams have an opportunity for a no-hitter). The mean is .0006*3166 = 1.9 no-hitters. The Poisson probability for x observations is:

P(x) = e^(-mean)*mean^x/x!

So:



The first column gives the probability of observing x no-hitters; the second column gives the probability of observing at least x no-hitters. You can see that there is a 3.1% chance of observing exactly five no-hitters and a 4.4% chance of observing at least five, at least based on this Poisson model with a .0006 probability of a no-hitter in any individual game.
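
Here's a minimal sketch reproducing those figures from the Poisson formula above:

import math

def poisson(x, mean):
    return math.exp(-mean) * mean ** x / math.factorial(x)

mean = .0006 * 3166  # about 1.9 expected no-hitters
print(round(poisson(5, mean), 3))                             # about .031
print(round(1 - sum(poisson(x, mean) for x in range(5)), 3))  # about .044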

Of course, this model assumes that the probability of a no-hitter is fixed at the observed level over the last fifty or so seasons, regardless of changes in league environment. To crudely estimate a more 2010-specific probability, consider that the overall ML BA is .2594; the 1961-2009 average was .2597. This season is pretty much in line with the average BA from which the sample data comes.

Of course, the sample is just that, so we can figure a rough probabilistic estimate of no-hitter frequency. If a pitcher needs to record y outs to get a no-hitter, and each at bat is treated as independent, then the probability of a no-hitter is (1 - BA)^y. Of course each at bat is not truly independent, and each batter doesn't have the same BA, and you can add some other objections.

If you use 27 as y, you will definitely underestimate the frequency of no-hitters, as many no-hitters don't actually require 27 batting outs--there are outs made on the bases, outs made by sacrifices which don't figure into BA, etc. If you do what I am going to do, and set y = 25.2, which is the average number of batting outs per game, you're not considering that there are generally fewer non-batting outs when there are fewer baserunners.

Using .2594 and y = 25.2, the estimated probability of a no-hitter is (1 - .2594)^25.2 = .00052, which over 3,166 games yields a mean of 1.64 no-hitters. Using that mean, the Poisson probabilities are:



Neither of these approaches is foolproof, but they both indicate that it is not extremely unlikely to see five no-hitters over 3,166 games.
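
For completeness, the BA-based estimate above in code form (a minimal sketch; the resulting mean feeds the same Poisson formula):

ba, outs = .2594, 25.2
prob = (1 - ba) ** outs  # about .00052 per team-game
print(round(prob, 5), round(prob * 3166, 2))  # about .00052 and 1.64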

As an aside, it's well-known that the Mets have never had a no-hitter in franchise history, covering 7,742 games (again, through 8/2). Using the Poisson approach and a .0006 probability (which ignores the quality of Mets pitching, park effects, etc.), the mean is 4.65, and the probability of zero is just .961%.

Using a binomial approach, the probability of zero is (1 - .0006)^7742 = .959%, and you can see that the Poisson matches the binomial very well. It is much easier to work with, though, especially when dealing with non-zero observations. If we wanted to know the probability that the Mets had pitched seven no-hitters, we'd have to compute C(7742, 7)*(.0006)^7*(1-.0006)^(7742-7), which a spreadsheet can handle but it's a big mess. The binomial estimate for the probability of seven Met no-nos is 8.90%, which is the same as the Poisson estimate.
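
And a minimal sketch of that comparison, showing that the two distributions agree to the precision we care about here:

import math

p, n = .0006, 7742
mean = p * n

def poisson(x):
    return math.exp(-mean) * mean ** x / math.factorial(x)

def binomial(x):
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

for x in (0, 7):
    print(x, round(binomial(x), 4), round(poisson(x), 4))
# 0: about .0096 either way; 7: about .089 either way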

I'll leave you with that; probability of x no-hitters for the Mets franchise: