The idea of market inefficiencies has been beaten to death since the inception of Moneyball. It's been more than a decade since the A's improbable winning streak, though, and the secret of OBP is out. So how do teams try to better themselves with as little money as possible? One of the simplest ways might be finding a better manager and general manager, especially given how little they are paid relative to the players on the field. But how do you evaluate the success of someone who doesn't play on the diamond, and thus doesn't produce directly measurable statistics on a nightly basis? Earlier this year, Foster Honeck of Fangraphs evaluated general managers by looking at a team's success relative to its payroll. There are, of course, several factors besides payroll size that can affect a team's success, but those should average out given a large enough sample size. Today, I hope to evaluate managers in a similar way -- by examining how well a manager constructs a batting order.
First, this model assumes that the number of hits a team produces is completely independent of the batting order chosen -- that is, players don't perform differently based on their lineup spot. This may not be strictly true in practice, but it's reasonable to assume the total number of hits shouldn't change much as a result of the names on the lineup card. The manager's goal, then, is to produce the lineup card that yields the most runs from the hits their team produces. Much of this comes down to hitting well with runners in scoring position, and that should be the manager's aim in creating a lineup: increase the likelihood of batters stringing hits together, and therefore the number of hits with RISP.
In the scatterplot of team runs against team hits above, we can see that the Tigers are meeting expectations fairly well: their point sits right on the line of best fit. That means the total number of runs they have produced this season is very close to the number of runs we would predict them to score based on their total number of hits. By contrast, the Athletics have outperformed their expected run total more than any other team, while the Diamondbacks have underperformed more than any other team.
This example is a bit silly, though, as it would be much better to predict a team's run total from more factors -- like whether a hit was a home run or a single -- and to include walks. While we can't make a pretty scatterplot from such a model, we can still evaluate the predicted number of runs as a function of several inputs. Specifically, I will use the number of singles, doubles, triples, home runs, and walks.
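Under the hood, this expected-runs model is just a multiple linear regression of team runs on those five counting stats. Here's a minimal sketch in Python with NumPy; the team totals below are invented for illustration, not real 2014 stats:

```python
import numpy as np

# Illustrative team-season totals -- columns are singles, doubles, triples,
# home runs, walks. These are made-up numbers, not actual team stats.
X = np.array([
    [950, 280, 25, 155, 480],
    [900, 260, 30, 120, 510],
    [1010, 300, 20, 180, 450],
    [880, 240, 35, 140, 500],
    [940, 270, 28, 160, 470],
    [970, 290, 22, 130, 520],
    [920, 250, 18, 170, 460],
    [990, 310, 26, 145, 490],
    [860, 230, 24, 110, 440],
    [960, 265, 21, 150, 505],
], dtype=float)
actual_runs = np.array([740, 680, 790, 700, 730, 720, 735, 760, 640, 715],
                       dtype=float)

# Fit runs ~ b0 + b1*1B + b2*2B + b3*3B + b4*HR + b5*BB by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual_runs, rcond=None)

expected_runs = A @ coef                    # model's prediction per team
residuals = actual_runs - expected_runs     # Actual - Expected, per team
```

Teams with positive residuals scored more than the model predicts from their raw hit and walk totals -- which, under this framing, we credit (at least partly) to the lineup card.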
Thus, to evaluate how well a manager creates the batting order, we will look at each team's (Actual Runs) - (Expected Runs). These results are summarized in the table below:
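As a toy illustration of how that ranking is produced (the actual/expected totals here are invented, not the real 2014 numbers, and the real table covers all 30 clubs):

```python
# Hypothetical (actual, expected) run totals for a few teams.
teams = {
    "Athletics": (330, 301),
    "Tigers": (305, 308),
    "Diamondbacks": (290, 318),
}

# Sort by Actual - Expected, best lineup-setters first.
ranked = sorted(teams.items(),
                key=lambda kv: kv[1][0] - kv[1][1],
                reverse=True)
for name, (actual, expected) in ranked:
    print(f"{name}: {actual - expected:+d}")
```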
| Team | Actual - Expected |
| --- | --- |
Looks like Joe Maddon's lineups haven't been working out for him so far this season, which might have led him to the Tommy Tutone order last night. The Tigers, just below their expected total so far, seem to be about par for the course. But has the Ausmus era been an improvement? We can compare to the results of 2013:
| Team | Actual - Expected |
| --- | --- |
Taking all of Jim's lineups into account last year (including all of the Sunday lineups and lineups with Don Kelly batting 3rd), it appears that his team generated more than 22 fewer runs than this model says they should have. While the remainder of the 2014 season is yet to be determined, Ausmus appears to be doing much better than Leyland in this regard.
Some other interesting points:
- Ron Washington may be dumb enough to intentionally walk Cabrera to get to VMart, but it appears he sets a pretty decent lineup. (That, or the Rangers just hit lots of HRs in Arlington -- HRs tend to score runs regardless of the batting order.)
- Despite the accolades Joe Maddon receives for his managing prowess, this measure doesn't favor him at all.
- While there are some notable differences between the two seasons (especially the Cardinals), I'm surprised at how many teams have similar rankings from 2013 to 2014.
If I find time to build a data set of managers from previous years, I'd like to aggregate several seasons for each manager and run a regression analysis on that. As previously mentioned, a team's "clutchiness" can clearly skew these results -- teams with many hits with runners in scoring position will outperform their expected runs -- so pooling several seasons per manager may reflect more upon the manager and less upon the team. I hope you enjoyed this analysis, and I look forward to potentially writing more!
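The multi-season aggregation I have in mind could be sketched like this. The manager/season residuals below are placeholders (I haven't built that data set yet), and the helper names are my own:

```python
from collections import defaultdict

# Hypothetical (manager, season, Actual - Expected) records -- illustrative
# values only, not real computed residuals.
records = [
    ("Leyland", 2013, -22.0),
    ("Ausmus",  2014,  -1.0),
    ("Maddon",  2013,  -5.0),
    ("Maddon",  2014, -12.0),
]

# Pool each manager's seasons, then average the residuals so that
# team-level "clutchiness" in any single season washes out.
by_manager = defaultdict(list)
for manager, season, resid in records:
    by_manager[manager].append(resid)

avg_residual = {m: sum(v) / len(v) for m, v in by_manager.items()}
```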