Mediation

How to calculate your chances of winning at trial

By Bruce Greig on May 5, 2021

Lawyers need to be able to understand the basics of probability to confidently predict outcomes at court. This post covers everything you need to know.


This topic is one we cover in my Maths for Smart Lawyers course - how to correctly combine outcome forecasts for different aspects of your case into one overall forecast. You might have a couple of different arguments to put forward, one you are very confident about, the other weaker but still worth advancing. How do you assess your overall chance of success?


The basics that you might remember from maths at school


To work out the probability of either one event or another happening, you add the probabilities together. To work out the probability of both events happening you multiply them. You probably remember something of this from school maths.


So if you have one roll of a dice (“die” for the pedants), the chance of rolling a 1 is ⅙. The chance of rolling a 2 is also ⅙. The chance of rolling a 1 or a 2 is 2/6 = ⅓. This is straightforward. You have 2 possible wins out of 6 possible outcomes, so 2/6 makes sense.


The other school maths example deals with rolling the die twice. What are the chances of rolling two sixes? That’s ⅙ x ⅙ = 1/36. Again, this is easy to check in your head. There are 36 possible outcomes (you can count them: 1 and 1, 1 and 2, 1 and 3, etc.). So the chance of getting one of those possible outcomes is 1/36.
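Both dice results can be checked by brute force. A quick Python sketch, enumerating every pair of rolls:

```python
from itertools import product

# Enumerate every outcome of rolling a die twice: (1,1), (1,2), ... (6,6)
outcomes = list(product(range(1, 7), repeat=2))
double_sixes = [o for o in outcomes if o == (6, 6)]

print(len(outcomes))      # 36 possible outcomes
print(len(double_sixes))  # 1, so the probability is 1/36
```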


This is all straightforward. A OR B, you sum the probabilities. A AND B, you multiply.


But when you try to apply this in the courtroom, it doesn’t seem to work


How about this: you propose two arguments to defend your client in court. Either one will get them off the hook. Let’s say you argue that the other side’s claim should be denied because it is out of time. And, in the alternative, you say the contract doesn’t cover the matter claimed anyway (“out of scope”).


You estimate the ‘out of time’ argument has a 50% chance of success. And the ‘out of scope’ argument also has about 50% chance of success.


What is your overall chance of success?


Either outcome works, so this is an “OR” scenario, right? So you might try to add them together. But 50% + 50% = 100%. That’s certainty. That can’t be right. You can’t be guaranteed to win just by offering two 50:50 arguments. What if you had a third argument to make, also with 50% chance of working? Would that mean you had 150% chance of success? Something’s not right there.


How about you multiply them? 50% x 50% = 25%. That doesn’t seem right either. If one argument alone has a 50% chance of winning, having a crack with a second argument surely cannot lower your chance of winning.


You do need to add the probabilities together, but then subtract the probability of them both occurring. Why didn’t you have to do that with the school dice example earlier? Well, you did need to, but the probability of both events occurring was zero. You can’t throw a 1 and 2 with a single throw of the dice. So it doesn’t matter, subtracting zero from your answer gives you the same answer. But you can persuade the judge on both your ‘out of scope’ and ‘out of time’ arguments.


So the chance of winning either one, or both, is actually 50% + 50% - (50% x 50%) = 75%.


Note that we are not discarding the possibility of both succeeding. That’s not why we subtract ‘both’. It is because we will double-count the possibility of both succeeding, unless we subtract it. I’ve covered this in more detail below, using a playing card example, if you’d like to see exactly what is going on here.


Look at this another way. What are the chances of both arguments failing? One failing = 50%. The other failing = 50%. So both failing = 50% x 50% = 25%.


Now then, as long as you avoid having both of your arguments fail, you win. If the probability of both failing is 25%, then the probability of that outcome not occurring = 1 - 25% = 75%. Therefore you have a 75% chance of succeeding, one way or another.
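Both routes can be sketched in a few lines of Python (the 50% figures are the estimates from the example above):

```python
p_time = 0.5   # estimated chance the 'out of time' argument succeeds
p_scope = 0.5  # estimated chance the 'out of scope' argument succeeds

# Route 1: add, then subtract the double-counted overlap
# (multiplying for the overlap assumes the arguments are independent).
p_either = p_time + p_scope - p_time * p_scope

# Route 2 (complement): 1 minus the chance that both arguments fail.
p_either_via_complement = 1 - (1 - p_time) * (1 - p_scope)

print(p_either)                 # 0.75
print(p_either_via_complement)  # 0.75
```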


Taking the opposite probability (which mathematicians call the “complement”) is usually easier than adding the probabilities and subtracting the double-counting. With more than two arguments, the double-counting correction gets fiddly to calculate. But the complement method always works, as long as all the events are independent of each other (i.e. the success of one argument does not depend on the success of another).


For example, you have three different arguments, with probabilities of 30%, 35% and 60% respectively.


The long-hand way would be:


P(A) + P(B) + P(C) - P(A ∩ B) - P(A ∩ C) - P(B ∩ C) + P(A ∩ B ∩ C)


0.3 + 0.35 + 0.6 - (0.3 x 0.35) - (0.3 x 0.6) - (0.35 x 0.6) + (0.3 x 0.35 x 0.6) = 0.818


The various adding and subtracting of different combinations of probabilities is to eliminate any double-counting. As you can see, even with just three probabilities, you get a very drawn-out calculation.


The shorter way is to work out the probability of none of these events occurring. The probability of your 30% argument not working is 70%, and so on, giving you:


0.7 x 0.65 x 0.4 = 0.182


Then take the complement of that: 1 - 0.182 = 0.818


That’s the same answer as the more convoluted calculation earlier.
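For any number of independent arguments, both methods can be written as short Python functions (the function names are my own). The long-hand version implements the inclusion-exclusion sum above: add the single probabilities, subtract the pairs, add the triples, and so on.

```python
from itertools import combinations
from math import prod

def p_any_longhand(probs):
    """Inclusion-exclusion: add singles, subtract pairs, add triples, ..."""
    total = 0.0
    for k in range(1, len(probs) + 1):
        for combo in combinations(probs, k):
            total += (-1) ** (k + 1) * prod(combo)
    return total

def p_any_complement(probs):
    """Shorter route: 1 minus the chance that every argument fails."""
    return 1 - prod(1 - p for p in probs)

probs = [0.3, 0.35, 0.6]
print(round(p_any_longhand(probs), 3))    # 0.818
print(round(p_any_complement(probs), 3))  # 0.818
```

Note how much less work the complement function does: with ten arguments the long-hand sum has over a thousand terms, while the complement is still one multiplication per argument.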


A bit more detail on why you have to subtract some combinations, to avoid double-counting


What’s going on here? Why are we having to subtract some combinations of outcomes?


The principle is best illustrated with a deck of cards.


Consider a deck of 52 playing cards. Let’s say the judge is deciding your case by surreptitiously drawing a card from the deck. The more of ‘your’ cards there are in the deck, the greater your chance of winning.


Let’s say that even numbers represent one argument you are putting to the court. If the judge draws an even number, that means he’ll say he’s been won over by that particular argument and make a decision in your favour. There are 20 even-numbered cards, so you have a 20/52 = 38% chance of the judge drawing an even-numbered card from the deck.


And let’s say that red cards (hearts and diamonds) represent some other argument. There are 26 red cards, so you have a 26/52 = 50% chance of the judge drawing a red card.


Now then, what is the chance of the judge drawing a card which is even numbered, or red, or both?


If you just add up the two probabilities (20/52 + 26/52) you’d get 46/52 = 88%. But there are not 46 possible cards which meet your criteria. If you count 46, you are double-counting the red even numbers.




There are actually just 36 individual cards which work for you: 26 reds, plus ten even-numbered blacks. You don’t count the even-numbered reds, because you’ve already counted them when you counted the reds.


That’s why you have to subtract the odds of success at both ‘out of scope’ and ‘out of time’ arguments, because you have already accounted for both happening in your estimates of one happening.


Here is all that laid out the long way around:


Probability of ‘red or even’ = 20/52 + 26/52 - (26/52 x 20/52) = 36/52 ≈ 69%, which is 36 out of 52 cards (the same number you get if you count the individual cards which are either red, even or both).


And doing it by taking the ‘not win’ calculation:


Not red = 26/52

Not even = 32/52

Not red and not even = 26/52 x 32/52 = 16/52 ≈ 0.31

Everything else = 1 - 16/52 = 36/52 ≈ 0.69

36/52 of the 52 cards = 36 cards
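The card counts are easy to verify by building the deck in Python. This sketch models only what the example needs: each card's rank and colour (face cards and the ace are never “even-numbered”):

```python
# A deck is 13 ranks x 4 suits; only number cards 2-10 can be even.
ranks = list(range(2, 11)) + ["ace", "jack", "queen", "king"]
colours = ["red", "red", "black", "black"]  # hearts, diamonds, clubs, spades
deck = [(rank, colour) for rank in ranks for colour in colours]

def is_even(rank):
    return isinstance(rank, int) and rank % 2 == 0

red = [c for c in deck if c[1] == "red"]
even = [c for c in deck if is_even(c[0])]
red_or_even = [c for c in deck if c[1] == "red" or is_even(c[0])]

print(len(deck))         # 52
print(len(red))          # 26
print(len(even))         # 20
print(len(red_or_even))  # 36, not 46: the 10 red evens are counted once
```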


What about expected value?


Expected value is a term used to represent the overall outcome you should expect taking into account the probabilities involved. If you are offered a coin toss, with a prize of £100 for heads and nothing for tails, the expected value of that offer is £50. You have a 50% chance of winning £100, so the expected value is £50.


If you are offered a prize of £100 for heads, but you lose £10 if you turn up tails, then the expected value is 50% of £100 less 50% of £10, so expected value of £50 - £5 = £45.


If you ran that coin toss a hundred times, you’d expect about 50 heads = £5,000. And 50 tails = £500 loss. Overall you’d be £4,500 up, i.e. £45 per coin toss.


You can run the same calculation on your litigation estimates. Your client will very often be exposed to some cost if the case doesn’t go their way.


If you put your chances of winning at 60%, and a win means a £250k award for your client, then your expected value is 60% x £250k = £150k.


But you need to subtract from that the expected loss if you lose. Let’s say they will have a net loss of £375k if they lose. There was a 60% chance of winning, so a 40% chance of losing. 40% x £375k = £150k loss.


Adding those two together gets you £150k - £150k = £0. Zero. A 60% chance of winning £250k, balanced by a 40% chance of losing £375k makes this case probably not worth pursuing.
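The litigation version is the same weighted sum, using the figures from the example above:

```python
p_win = 0.6
award = 250_000   # net gain to the client on a win, in pounds
loss = -375_000   # net loss to the client on a defeat, in pounds

# Expected value: probability-weighted sum of the two outcomes.
expected_value = p_win * award + (1 - p_win) * loss

print(round(expected_value))  # 0: the upside and downside cancel out
```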


Of course, for an individual client, there is no outcome where they end up at £0. They will either win £250k or lose £375k. For you as a lawyer, running many similar cases, the concept of expected value works fine. But an individual client doesn’t get to run the case many times to average out their position. They only get one shot, and their appetite for risk will affect their decision to proceed. They might be bankrupt without the £250k, and bankrupt anyway with the £375k loss. There might be other considerations in play, too, like the need to set a precedent, or dissuade other claims, or the old chestnut of ‘having their day in court’.


The calculations quickly get more complicated once you take into account exposure to legal fees, and factor in Part 36 offers. But the principles are always the same: calculate the probability of a certain outcome. Calculate the loss or gain associated with that outcome. Multiply the probability by that loss or gain. Then repeat for each outcome, and add them all up, and you will get a figure which represents the overall expected value of that claim.


You can repeat the exercise for what you think the other side will be estimating. If those two expected values are close together, then it is likely that a negotiated settlement will be easy to achieve.


To learn more about this, and more maths topics which are useful for lawyers, come along to my 2 hr CPD session, Maths for Smart Lawyers