Lawler’s Law: How Accurate Is It, And Can It Be Improved?

If you’ve ever watched a Los Angeles Clippers game on NBA League Pass or on local TV, you’ve probably listened to play-by-play man Ralph Lawler.  And if you happened to have stuck around toward the end of that Clipper game (which was an especially painful thing to do during the ’80s and ’90s), you may have learned of Lawler’s Law.

The first team to score 100 points wins.

- Lawler’s Law

Lawler’s Law is presented tongue-in-cheek (“it’s the LAW”), but obviously exists more as a guideline than a law.

I’ve always wondered how accurate Lawler’s Law is, so I did the math on play-by-play data for games over the past 3 seasons.

It turns out, Lawler’s Law is pretty accurate.  Teams that scored 100 points first ultimately won the game around 93.5% of the time.  Some Googling turns up other sources that put the accuracy at around 91-92%.  That difference could be statistically significant, but it isn’t practically much different, and it can probably be attributed to my smaller, more recent sample of games.

I don’t know what I expected, but I found that percentage to be high, and surprisingly so.  That led me to ask a follow-up question: does a better Lawler’s Law exist with a score threshold other than 100 points?  After all, 100 points is an arbitrary threshold.  It looks nice because that’s when we move from 2 digits to 3 digits in the base 10 numbering system.  But it’s entirely possible that a less sexy number like 101 points could be a better threshold that predicts the winner of the game.

So I went through the same process for a bunch of different scoring thresholds.  And it turns out that you can do better than 100 points.  The higher the threshold you choose, the more likely it is that the first team to reach it will go on to win the game.  A threshold of 101 points is better than 100, and 119 points is better than 101.
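The threshold sweep can be sketched in a few lines.  This is a minimal illustration rather than the actual analysis code: it assumes each game’s play-by-play has been reduced to a chronological list of (home_score, away_score) pairs, and all names here are hypothetical.

```python
# Sketch of the threshold sweep described above (illustrative only).
# Each game is assumed to be a chronological list of (home_score, away_score)
# tuples, one entry per scoring event.

def first_to_reach(progression, threshold):
    """Return 'home' or 'away' for whichever team reaches `threshold` first,
    or None if neither team ever gets there."""
    for home, away in progression:
        if home >= threshold:
            return "home"
        if away >= threshold:
            return "away"
    return None

def winner(progression):
    """Whichever team leads at the final score (NBA games cannot end tied)."""
    home, away = progression[-1]
    return "home" if home > away else "away"

def threshold_accuracy(games, threshold):
    """Accuracy of the Law among applicable games, plus the share of games
    in which the Law applies at all."""
    applicable = correct = 0
    for progression in games:
        leader = first_to_reach(progression, threshold)
        if leader is None:
            continue  # neither team reached the threshold; the Law is silent
        applicable += 1
        if leader == winner(progression):
            correct += 1
    accuracy = correct / applicable if applicable else float("nan")
    return accuracy, applicable / len(games)
```

Sweeping `threshold` across a range of scores and plotting the first return value gives the accuracy curve; the second return value gives the applicability discussed below.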

[Figure: correct_vs_threshold (prediction accuracy vs. scoring threshold)]

But that result seems obvious, because at some point it becomes difficult to score a certain number of points.  Looking again at the data, a typical team scores somewhere around 95-97 points a game.  (The average final team score is 97 points; alternatively, you can estimate about 95 points from an average of 80 shot attempts per game at 0.98 points per attempt, plus 23 free throws at a 75% rate.)  Scoring 100 points isn’t far-fetched, but scoring 115 proves more difficult, usually because the team must have shot very well over the course of the night, which doesn’t happen very often.  But how often does that occur for various scoring thresholds?
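The back-of-the-envelope figure in the parenthetical works out as follows, using the 80-attempt, 0.98-points-per-attempt, and 23-free-throw numbers quoted above:

```python
# Rough check of the ~95 ppg estimate: 80 shot attempts at 0.98 points
# per attempt, plus 23 free throws made at a 75% rate.
field_points = 80 * 0.98        # 78.4 points from field-goal attempts
free_throw_points = 23 * 0.75   # 17.25 points from the line
print(field_points + free_throw_points)  # ~95.65, close to the quoted 95 ppg
```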

As you can see from the graph below, nearly all games end with at least one team breaking the 80 point barrier, and over 90% of games with one team breaking 90.  But that percentage falls precipitously as we move into the 90-something point thresholds.  Only about 62% of games ever reach 100 points, meaning Lawler’s Law applies for about 3 of every 5 games.  Notice how the 100 point threshold falls in the middle of the steeply downward-sloping portion of the graph.

[Figure: games_vs_threshold (share of games in which a team reaches each scoring threshold)]

Here we see the downside of Lawler’s Law: it only applies some of the time.  Having a 93.5% predictive accuracy for 3 of every 5 games, but having no opinion on the other 2 games doesn’t seem all that predictive.  In fact, you could argue that the true accuracy of Lawler’s Law can be no higher than 62%.

So what if we judged Lawler’s Law not only on the truthfulness of the Law itself, but also by the number of games to which the Law applies?  Or more simply, what if we defined accuracy (“total accuracy”) as the number of games the Law correctly predicts, divided by the total number of games, regardless of whether the Law is applicable?
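Note that this definition factors neatly: total accuracy is just the Law’s accuracy when it applies, multiplied by how often it applies.  A two-line sketch, using the rounded figures from this post (so the product is off by a hair):

```python
def total_accuracy(accuracy_when_applicable, applicability):
    """Share of ALL games the Law calls correctly: games where no team
    ever reaches the threshold count as misses."""
    return accuracy_when_applicable * applicability

# Lawler's Law at 100 points: ~93.5% accurate when applicable, applicable
# in ~62% of games.  Both inputs are rounded, hence the small gap from
# the 58.9% figure computed on the raw data.
print(total_accuracy(0.935, 0.62))  # ~0.58
```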

By essentially combining the two previous graphs, we get the graph below.  Notice how total accuracy slowly increases as we move from scoring thresholds in the 60s to those in the 70s.  This makes sense: over 99% of games have at least one team reaching 79 points, and the accuracy of Lawler’s Law increases from 80% at a 69-point threshold to 84.4% at 79 points.

[Figure: new_metric (total accuracy vs. scoring threshold)]

Moving to scoring thresholds in the 80s, total accuracy flattens out, reaching a high of 84.8% at both the 82- and 87-point thresholds.  Practically speaking, total accuracy doesn’t differ much across scoring thresholds in the 80s.  The Law’s raw accuracy continues to improve, with the 89-point threshold nearly 90% accurate, though only 93% of games ever see a team reach 89 points.

At this point, losses in applicability begin to dominate gains in accuracy, and you see total accuracy fall when we move into the 90-something point thresholds.  Judged by this new metric, Lawler’s Law at the 100 point threshold has a total accuracy of just 58.9%.  Again, Lawler’s Law is 93.5% accurate, but only becomes applicable for 62% of the games.

Using the total accuracy metric, we find that we can improve Lawler’s Law.  Total accuracy is maximized in the 80-something point thresholds, and because there is very little practical difference in accuracy among them, let’s choose the smallest one and restate the Law.

The first team to score 80 points wins.

- Lawler’s Law, version 2.0

Under the new Law, you’ll be right in only around 5 of every 6 games.  Stated differently, you’ll be WRONG about 15% of the time, which doesn’t sound nearly as cool as being 93.5% right.

But Lawler’s Law wasn’t designed to be 100% truthful: that’s part of the joke.  The Law is a pretty good approximation for the truth, and for most purposes, that’s good enough.  Statistics don’t always have to be absolutely precise.  I love Lawler’s Law because it isn’t perfect but does the job of predicting winners effectively and cheaply, living comfortably with all its flaws and error rates.  Sometimes, good enough is good enough.

  • Yam

    Awesome analysis!

  • Asdf

    Not sure if it would change much, but would adjusting Lawler’s Law to account for game time at which the threshold is reached lead to a more accurate measure?
    As in if the threshold of 80 points is right 85% of the time, what if a team reaches 80 points at the 38 minute mark vs the 46 minute mark?

    • vorpedadmin

      Indeed. Taking time remaining into consideration would lead to a more accurate prediction model, and this type of analysis starts to veer into win probability territory, of which there are many great analyses, like this thread on apbr: http://godismyjudgeok.com/DStats/APBRmetrics_Old/viewtopic.php@t=586.html

      It’s not exactly the same as the analysis you proposed, but you get the idea.

  • crimulus

    Yes game time and point differential at the threshold would be interesting aspects to add to the equation. Obviously the lower-score team at 100-98 wins more often than at 100-80 (or 80-78/80-64)

  • eazup

    how does the accuracy fare when applied to point differential when 100 is reached?
    as in- if point differential is more than 10 when 100 is reached by either team, there is a 99.8% accuracy (guessing as an example)
    but when the differential is 2, the accuracy is only 78%

  • Electricity Market Analyst

    Hi, love this post. I noticed your x-axis doesn’t go below about 65 points as a threshold, because both teams virtually always score more than that. But I’m really interested in the lower thresholds. Specifically, how low of a threshold do you have to go for it to be really meaningless, i.e. a win probability around 50%? Frankly I was pretty surprised that first to 65 points still gives around an 80% chance of winning.

    Presumably first to 2 is pretty meaningless, but is first to 40 already somewhat meaningful? First to 30? 20? Or maybe I’m wrong and even first to 2 will give around 50% chance of winning. I have no idea but would be really interested to see your first graph with the x-axis starting at 1 or 2.