I have always had a bit of a problem with the question “what’s the probability of success?”. Is this patent going to be granted? Are we going to win this proposed litigation? Clients quite reasonably want to know, and most people accept there is some uncertainty. But when pushed, how do we put a number on it? I might have a case that I think we are “probably going to win”, but is that a 51% chance of success, 60%, 70%?
What does the probability of success actually mean?
Some years ago I read the famous Feynman lectures on physics. Lecture 6 is about probability. For some reason, the following observation has always stuck in my mind:
It is not clear that it would make any sense to ask: “What is the probability that there is a ghost in that house?”
Why is that? There might be quite a few problems when it comes to ghosts, but Feynman’s core point is about the potentiality of repeated observations:
By the “probability” of a particular outcome of an observation we mean our estimate for the most likely fraction of a number of repeated observations that will yield that particular outcome… we may speak of a probability of something happening only if the occurrence is a possible outcome of some repeatable observation.
The problem with ghosts is presumably that they can’t be observed at all, but the problem of repeatable observation also arises in the more worldly domain of patent attorneying. In a weak case of design infringement, for example, I might advise: “I think we will probably lose, but you never know, some judge might say the overall impression is the same, so say about a 20% chance of success”. That doesn’t sound very optimistic, but what does “20%” actually mean? 20% is 1 in 5, so if we ran the case five times we would win one of the trials?
Obviously, the law doesn’t work like that – if you lose you can’t just have another go, and neither can you keep having trials and then take an average (presumably with the result that the defendant would be injuncted to prevent 20% of their sales and would have to pay 20% of the damages…).
And that is a key difference between infringement trials and rolling dice. The probability of throwing a six on a fair die is 1 in 6 (about 16%), and that means that if we roll the die over and over again, which clearly we can do, we would expect about a sixth of the rolls to score a six.
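The die is the one example here where Feynman’s repeated-observation definition can actually be run. A quick simulation in Python (the number of rolls is arbitrary):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
rolls = 60_000
sixes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)

# The observed fraction settles close to 1/6 as rolls accumulate
print(sixes / rolls)
```

It is exactly this possibility of rerunning the observation that is missing in litigation.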
But making sense of probability here isn’t completely hopeless. More recently I read “How Spies Think” by David Omand, former director of GCHQ, and was comforted to know that I am not the only one to have noticed the problem. Omand says:
Analysts speak… of their degree of belief in a forward-looking judgement. Such a degree of belief is expressed as a probability of being right. This is a different use of probability from that associated with gambling games like dice or roulette, where the frequency with which a number comes up provides data from which the probability of a particular outcome is estimated.
Then he explains how to reconcile these two different “kinds” of probability:
…we think of the odds that intelligence analysts would rationally accept on their estimate being right. That is the measure of their degree of belief in their judgement.
So, if I say “there’s an 80% chance we will win”, that ought to mean that I would place a bet if a bookmaker offered me better than 1/4 on to win my client’s case. I might still lose the case and lose the bet, but if I place rationally-calculated bets on all my infringement cases, and if my probabilities of success are right, then I should make money in the long run? Perhaps.
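The arithmetic behind “rationally accepting the odds” is straightforward. A sketch in Python (the function and the figures are illustrative, not from Omand): fractional odds of 1/4 on mean winning 0.25 for every 1 staked, which is exactly the break-even price for a genuine 80% chance.

```python
def expected_profit(p_win, fractional_odds, stake=1.0):
    """Expected profit per bet at fractional odds
    (e.g. 1/4 on = 0.25: win 0.25 for each 1 staked, lose the stake otherwise)."""
    return p_win * fractional_odds * stake - (1 - p_win) * stake

# With a genuine 80% chance, fair odds are 0.2/0.8 = 1/4:
print(expected_profit(0.8, 0.25))  # break even at fair odds
print(expected_profit(0.8, 0.30))  # better than 1/4 on: a rational bet
print(expected_profit(0.8, 0.20))  # worse than fair: decline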
I will point out here that I do not know of a bookmaker who would allow me to make such bets. But I suggest that if one did, they would not have too many lawyers for customers. At least, the lawyers who did bet would be doing it mostly for fun, and hopefully with money they could afford to lose. If that is right, it suggests that lawyers (and perhaps, intelligence analysts) do not, possibly cannot, quote probability by genuinely applying the test laid out by Omand.
How many lawyers measure their Brier score, the standard measure of how accurate a set of probabilistic forecasts turns out to be? I’ll stick my neck out and admit that I don’t. (Glenn Brier, inevitably given his interest in assessing the accuracy of probabilistic forecasts, worked for the US Weather Bureau.)
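For any lawyer who did want to keep score, the Brier score is simple to compute. A minimal sketch in Python, with an entirely hypothetical track record of quoted chances and case outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and actual
    outcomes (1 = won, 0 = lost). 0 is perfect calibration; always
    guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record: probabilities quoted to clients vs results
forecasts = [0.8, 0.6, 0.3, 0.9, 0.5]
outcomes = [1, 1, 0, 0, 1]
print(round(brier_score(forecasts, outcomes), 3))  # 0.27
```

Lower is better; the 0.9 forecast that lost is what drags this imaginary record down.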
There are people who do bet on the outcome of litigation (increasingly, this includes IP litigation), and they do so not for fun but to make money. They are litigation funders and after-the-event insurers. The fact that they are so expensive is further support for the proposition that lawyers’ quoted probabilities of success cannot be taken too seriously.
Communicating uncertainty with words
In everyday life we are used to talking about uncertain future events. “I think the weather will probably stay good for the rest of the day”, “it’s highly unlikely my old boiler will last more than a couple more years”, and so on. Lawyers too use “probability words” when advising clients – “I think it is likely we will win this one”, “there’s a real possibility this witness won’t be believed”, etc.
It turns out that quite a lot of work has been done to try to figure out what people mean when they use words like “probably”, “unlikely”, and “real possibility”. Andrew Mauboussin was certainly not the first, but he created a neat online survey which you can try yourself at http://www.probabilitysurvey.com/. The result of his and others’ work, in a nutshell, is that when people map these words to numerical probability values, they do so rather inconsistently.
To ensure consistent use of this sort of language, intelligence agencies have developed “yardsticks” to map words to numbers. The example below is quoted in Omand’s book, and taken from the National Crime Agency’s National Strategic Assessment of Serious and Organised Crime, 2019.
Of course, for this to work it requires all parties to the attempted communication of information to have a copy of the yardstick, or key. Not all the definitions are necessarily intuitive (to me, anyway). For example, “highly unlikely” is supposed to span from 10% to 20%, but would you really say it was “highly unlikely” that I am going to roll a six on a fair die (a 1 in 6 chance, or about 16%)?
How we map words to numbers in our minds could vary not only between individuals, but also with the kind of question being asked and with our experience. I suspect one reason I find it difficult to say that throwing a six is “highly unlikely” is that I have played enough board games to have seen a six come up plenty of times.
Mauboussin’s firm advice is to skip the mapping step and just communicate with numbers:
For matters of importance where mutual understanding is vital, avoid nonnumerical words or phrases and turn directly to probabilities.
That is all very well, as long as the numerical probability is estimated with the rigour required to make sure it is consistent with the true, and informed, “degree of belief in a forward-looking judgment”.
The danger is that a numerical probability looks as if it has some evidential and analytic basis, when really it is just a guess. Hopefully a lawyer’s advice is better than a guess: it reflects the lawyer’s assessment of the evidence (at least, the evidence available so far) and his or her skilled application of the law to the facts, and is an honest attempt to place the case on a scale which in practice probably runs between about 20% and 80%. But that still might not be a “probability” in the true sense.
The other big problem, for lawyers at least, is that many clients seem to understand an “80% chance of success” as meaning “you are definitely going to win”. Even if the 80% is correct, that reading will produce a seriously dissatisfied client about 20% of the time.
So what should we be doing for our clients?
I started by saying that it is perfectly reasonable for clients to expect their lawyer to be able to give them some sensible answer to “am I going to win this?”. But for the reasons explained I believe there are problems with taking the answer too seriously. So what should we be doing to make sure that clients get real value out of our advice?
I would like to propose that good legal advice should help the client to understand the issues and the risks so that the client can make their own decision.
Our clients’ commercial and domain expertise also should not be forgotten. Our clients tend to know their own businesses. In IP work, the role of the “skilled person”, the “informed user” and the “average consumer” makes this observation particularly acute. So I further propose that the assessment of legal risks should be a collaborative exercise which takes advantage of the collective expertise of the lawyer and the client.
I do not expect that either of these propositions is controversial, or particularly novel, but I think it is worth making sure that they are at the forefront of our minds as lawyers trying to provide the best value to our clients.
References / further reading
The Feynman Lectures on Physics are available online at https://www.feynmanlectures.caltech.edu. Printed editions can also be purchased.
David Omand’s How Spies Think: Ten Lessons in Intelligence is available as a book made out of paper, an e-book, and an audiobook.
Andrew Mauboussin and Michael J. Mauboussin introduce their probability survey in the Harvard Business Review, available at https://hbr.org/2018/07/if-you-say-something-is-likely-how-likely-do-people-think-it-is. The survey itself is at http://www.probabilitysurvey.com/.
If you are inclined to wonder whether there is a ghost in your house, vehicle or workplace, you may enjoy Ghosts, available to licence fee payers in the UK on BBC iPlayer at https://www.bbc.co.uk/iplayer/episodes/m00049t9/ghosts
(At least, I know I read some of it. What is the probability that I read and understood every word? I wouldn’t like to say.)