Last week, I started the Saber-Terms series, with the goal of defining a lot of the things that I use to discuss baseball. As I mentioned, I do something very similar (but far more in-depth) over at my other site, Intro to Sabermetrics 101, but I think it is important for you to get some of these definitions at a place that you may be more familiar with. If you want more details, check out my other site, and tell your friends.

I discussed wOBA last week, a fitting start since I always start player discussions on offense, and wOBA is the primary way that most current sabermetrically-inclined bloggers discuss offense. Of course, wOBA is simply linear weights converted into a rate stat, so if you really want to know what goes on in the process of modeling offensive run production, I suggest you take a look into linear weights. Naturally, I've already got that covered for you at Intro to Saber; if you're interested in learning more about linear weights, check it out.

I thought, since I started with a discussion about runs, it would be pertinent to continue with how we convert runs into wins (ultimately the important part) and why we even bother to do all of this. An explanation of that can also be found here, but in this article I will talk more about why Pythagoras, of all people, is important to us baseball geeks.

**OK, it’s not really about Pythagoras**

The reason the old Greek philosopher/mathematician/divine being (according to his religious followers) is even mentioned in a baseball-related sense is that the sabermetric philosopher-king himself, one ***Bill James*** (I thought the bold italics were appropriate given his status), likened his initial runs-to-wins formula to the Pythagorean theorem of right triangles and other such math.

The original Pythagorean expectation equation was developed by Bill James, though the idea of estimating wins from runs dates back further. Earlier linear methods, which looked for a slope relating runs to wins, seemed reasonable at the MLB level, but like many linear methods they failed at the extremes. Most notably, some of them could yield winning percentages above one or below zero, both impossibilities in real baseball. James popularized the ratio method instead. The commonly known Pythagorean equation is as follows:

Expected Win% = Runs Scored^2 / (Runs Scored^2 + Runs Allowed^2)

It can also be rearranged to show this:

Expected Win% = 1 / (1 + (Runs Allowed/Runs Scored)^2)
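For concreteness, here's a quick sketch of both forms in Python (the run totals are made up for illustration):

```python
def pythag_wpct(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Expected winning percentage: RS^x / (RS^x + RA^x)."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def pythag_wpct_ratio(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Rearranged form: 1 / (1 + (RA/RS)^x); algebraically identical."""
    return 1 / (1 + (runs_allowed / runs_scored) ** exponent)

# Example: a hypothetical team that scores 800 runs and allows 700
print(round(pythag_wpct(800, 700), 3))        # 0.566
print(round(pythag_wpct_ratio(800, 700), 3))  # 0.566
```

Both forms always agree; the ratio version just makes it obvious that the result stays between 0 and 1.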

**Why does this work?**

James initially designed this from observational analysis, but subsequent research has shown that it relates runs to wins very well, within the standard error of actual wins for a team. There's a lot of statistics involved, so I won't get too deep into it, but the Wikipedia article covers it if you're interested. The short version: the formula can be derived by modeling runs scored and allowed with a Weibull distribution.

**The Exponent**

Later on, James said an exponent of 1.83, rather than 2, was more accurate. Since then, work on the subject has aimed to tailor the exponent so that it fits a wide variety of run environments. As with many of James' models based on empirical analysis, Pythagorean expectation works very well within the confines of a typical league run environment, but starts to break down at extremes. Even so, the ratio model of

W% = **R^x / (R^x + RA^x)**

still appeals as the basis of the "correct" formula because it keeps win% bounded between 0 and 1 (no win% greater than 1 or less than 0). Thus, work on the exponent began. Baseball Prospectus' Clay Davenport initially designed what became known as the Pythagenport version, using this exponent:

Exp = 1.5 × log10(RPG) + 0.45 (where RPG is total runs per game, both teams combined)

Davenport's research found Pythagenport accurate within run environments between 4 and 40 runs per game (RPG). Later on, however, David Smyth and Patriot independently determined this exponent formula:

Exp = RPG ^ 0.287 (where RPG is runs scored plus runs allowed per game; the exact exponent may vary slightly and is usually a matter of taste)

The basis of the formula came from an idea credited to Smyth: in a 1 RPG environment, a team wins if it scores the game's only run and loses if it does not. As a result, the exponent in this case must equal 1, yielding the following result:

W% = R / (R + RA)

This makes intuitive sense, and through regression work both eventually found a fit involving the run environment that preserves this boundary condition. This exponent, now referred to as the Pythagenpat exponent (Pythagenpat for the whole estimator equation), fit well in the ranges used by Davenport and still held up below 4 RPG, including the all-important 1 RPG case.
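To make the two exponent schemes concrete, here's a sketch in Python (the function names are mine, not standard):

```python
import math

def pythagenport_exponent(rpg: float) -> float:
    """Davenport's exponent; rpg is total runs per game (both teams)."""
    return 1.5 * math.log10(rpg) + 0.45

def pythagenpat_exponent(rpg: float, k: float = 0.287) -> float:
    """Smyth/Patriot exponent: rpg raised to a small power."""
    return rpg ** k

def pythagenpat_wpct(rs_pg: float, ra_pg: float, k: float = 0.287) -> float:
    """Expected win% using the Pythagenpat exponent for this run environment."""
    x = pythagenpat_exponent(rs_pg + ra_pg, k)
    return rs_pg ** x / (rs_pg ** x + ra_pg ** x)

# Smyth's boundary condition: at 1 total run per game the exponent is
# exactly 1, so the formula collapses to W% = R / (R + RA).
print(pythagenpat_exponent(1.0))            # 1.0
print(round(pythagenpat_exponent(9.0), 2))  # 1.88 (a typical 4.5 + 4.5 environment)
```

Note how close the two exponents are in a normal environment; they only diverge meaningfully at the extremes, where Pythagenpat behaves better.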

**Why do all of this?**

When we compare players, it is often quite difficult to determine how much better players of similar caliber are compared to each other. A recent example is the **Chris Coghlan** v. **Andrew McCutchen** ROY debate. If you look at OPS, Coghlan was .014 points better. They were tied in OPS+ at 122, according to Baseball-Reference's park factors. Coghlan also had 70 more PA. McCutchen played a more difficult position and was a better baserunner, at least by steals.

How can we determine who was better given these parameters? No single stat makes the distinction clear. One played more and may have been a slightly better hitter; the other was presumably a better defender and runner. The problem is that these skills are all measured in different units. That is why we need things like linear weights to determine *quantitatively* how much better or worse a player was compared to someone else. There's been a lot of work on run estimators to get that answer.

But ultimately, runs are not the most important thing. Wins are the most important thing (you hear that for pitchers all the time, though of course pitcher "wins" are not really only the pitcher's doing). We need something that can change runs into wins, and that's what Pythagenpat (the most widely used estimator) does. Thanks to these types of formulas, we can take run totals from all sorts of different measurements and turn them into wins just by manipulating Pythagenpat.

For example, let’s quickly look at the 10 runs / 1 win axiom I always use. Truthfully, it’s not always exactly ten runs. But if you wanted to know, it’s pretty simple. Take an average team (let’s say that’s 4.5 runs per game). Here’s your exponent:

Pythagenpat exp = (4.5 + 4.5) ^ 0.287 = ~1.88

Your average team has a win% of .500, so that's 81 wins over 162 games. What's the win% of 82 wins? .506. Use an average season total for Runs Allowed (4.5 × 162 = 729) and solve for Runs Scored:

0.506 = 1 / (1 + (729 / Runs Scored)^1.88)

Solving for RS gets you **Runs Scored = 738.37**. The difference between that and 729 is 9.37 runs, and that would be the value of a win in this environment. You can do this for any environment fairly easily. In a 5 RPG environment, the run value of a win is 10.1 runs / 1 win.
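The whole calculation can be scripted. This sketch follows the article's steps exactly, including rounding the target win% to .506, which is why it reproduces the 9.37 figure:

```python
def runs_per_win(team_rpg: float, games: int = 162) -> float:
    """Runs above average needed for one extra win, per the steps above."""
    exponent = (2 * team_rpg) ** 0.287               # e.g. (4.5 + 4.5)^0.287 ≈ 1.88
    runs_allowed = team_rpg * games                  # average team: 729 at 4.5 RPG
    target_wpct = round((games / 2 + 1) / games, 3)  # 82 wins over 162 -> .506
    # Invert W% = 1 / (1 + (RA/RS)^x) to solve for RS:
    ra_over_rs = (1 / target_wpct - 1) ** (1 / exponent)
    runs_scored = runs_allowed / ra_over_rs
    return runs_scored - runs_allowed

print(round(runs_per_win(4.5), 2))  # 9.37
print(round(runs_per_win(5.0), 2))  # 10.1
```

Swap in any run environment you like to see how the value of a win moves with it.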

There you have it. You now see where all this runs-to-wins crap comes from. If you ever want to question it, check out the math yourself. I think I’ll be doing these conversions for NL win totals, because as you can see, the run environment has distinct effects on the value of wins.