Conditional Probabilities and the Use of Tables
Let's say that during a yearly medical exam your doctor informs you that your test result came back positive for a certain disease K. This serious condition isn't all that common: according to studies, it afflicts 1 in 1,000 people. The particular diagnostic test that was used is such that among individuals who are known to have disease K, it correctly identifies 95% of them as having the disease, i.e., 95% of them test positive, which also means that for the remaining 5% the test erroneously comes out negative. Moreover, among those who are known not to have K, the test correctly identifies 80% of them as being free of K, which also means that for the remaining 20% the test incorrectly comes out positive.
Given that you tested positive for K, how confident can you and your doctor be that you in fact have the disease? If instead the results had come back negative, what is the probability that you don't have it?
You might think the 95% and 80% figures tell you how confident to be. That is, if the results come back negative then you can be somewhere around 80% sure you don't have K, and if the test comes back positive then you should start panicking, right?
Unfortunately, you need to do the maths to get a firm answer.
Since we will be using the 2×2 table, let's first understand what it is. The table gets its name from the fact that it has two rows and two columns for the numbers that get plugged into it. As you will see, however, the tables below will have more than just these four cells, simply because we'll be adding rows and columns for the totals. In what follows, the 2×2 table proper refers to cells a to d.
|           || Yes/True || No/False || Total |
| Yes/True  || cell a   || cell b   || nr1   |
| No/False  || cell c   || cell d   || nr2   |
| Total     || nc1      || nc2      || N     |
First off we need to define what our two variables will be. In this case the presence of the disease will be one variable, and the test result will be the other. Let's now start filling those cells with numbers.
To begin with, we choose an arbitrary sample size N that won't give us fractions in the cells. Let's pick a sample size of 20,000 people. Given that and the base rate (prevalence of the disease) of 1/1000 or 0.1%, we can expect to find 0.001 × 20,000 = 20 persons with K in a (random) sample of 20,000 individuals, while the rest, 19,980, will be K-free.
Recall that the test is 95% accurate when the person being tested is known to have K (the accuracy of a positive finding given that the disease is present is known as the test's sensitivity). Thus, of the 20 persons who do have K, the test will correctly show 19 of them to be positive for K (0.95 × 20), while it will incorrectly show 1 of them to be negative for the disease (0.05 × 20).
We are also told that 80% of the time the test correctly comes out negative when those being tested are known to be free of K (the accuracy of a negative finding given that the disease is absent is known as the test's specificity). In our sample, 19,980 don't have K. Hence, if they're given the test we can expect 15,984 to correctly test negative (0.80 × 19,980), while the test will incorrectly come out positive for 3,996 individuals (0.20 × 19,980). Plugging all those into our table we have:
base rate = 0.1%, sensitivity = 95%, specificity = 80%

|               || Has K || K-free || Total  |
| Test positive || 19    || 3,996  || 4,015  |
| Test negative || 1     || 15,984 || 15,985 |
| Total         || 20    || 19,980 || 20,000 |
What you and your doctor are interested in is how confident you can be that you have K given that you tested positive. In order to answer that question we need to look at the data in the first row. In particular we are interested in the relative frequency of having K given that you test positive for it. To get that we simply divide the number of people who test positive and have K by the total number of people who test positive: 19 / 4,015 = 0.47%. This means that if the test comes out positive there's only a 0.47% chance that you have disease K. In other words, there's a 99.53% chance (100% - 0.47%) that you're not sick at all!
What if the test comes back negative? How confident can you be that you don't have K? To find out we look at the numbers in the second row. This time we're interested in being free of K given that we tested negative for it. To obtain that figure we divide the number of people who test negative and who don't have K by the total number of people who test negative: 15,984 / 15,985 = 99.99%. This means you are almost absolutely certain that you don't have K if the test comes back negative.
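The arithmetic above is easy to script. Here's a minimal Python sketch of the same frequency method; the function name `fill_table` and the variable names are mine, not from the text:

```python
def fill_table(n, base_rate, sensitivity, specificity):
    """Expected 2x2 cell counts for a sample of size n."""
    sick = n * base_rate              # people who have disease K
    healthy = n - sick                # people who are K-free
    a = sensitivity * sick            # test positive, has K
    c = sick - a                      # test negative, has K (false negative)
    d = specificity * healthy         # test negative, K-free
    b = healthy - d                   # test positive, K-free (false positive)
    return a, b, c, d

a, b, c, d = fill_table(20_000, 0.001, 0.95, 0.80)
p_sick_given_pos = a / (a + b)        # 19 / 4,015  -> about 0.47%
p_free_given_neg = d / (c + d)        # 15,984 / 15,985 -> about 99.99%
print(f"P(K | positive)    = {p_sick_given_pos:.2%}")
print(f"P(no K | negative) = {p_free_given_neg:.2%}")
```

Dividing a cell by its row total is all the "conditioning" there is: the denominator restricts attention to the people with that test result.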
Now let's toy around with the base rate (the relative frequency of the disease in the population) and see how it affects the results. What if K is much, much more common than was previously thought? What if new studies show the base rate to be 10% (1 in every 10 has it)? Again using 20,000 for N, the computed data is as follows:
base rate = 10%, sensitivity = 95%, specificity = 80%

|               || Has K || K-free || Total  |
| Test positive || 1,900 || 3,600  || 5,500  |
| Test negative || 100   || 14,400 || 14,500 |
| Total         || 2,000 || 18,000 || 20,000 |
With an increase in the base rate our confidence in a positive result shoots up. Should you test positive, you would now be 34.5% certain you have K (1,900 / 5,500). Then again, it is still more probable that you don't have K (100% - 34.5% = 65.5%). Meanwhile, your confidence in a negative result would hardly change. Even with a hundredfold increase in the base rate, you can still be very sure that if you test negative then you are indeed K-free: 14,400 / 14,500 = 99.3%.
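To make the base-rate effect vivid, here's a short sketch that runs the same frequency calculation over several base rates; the chosen grid of base rates and the variable names are mine:

```python
N, SENS, SPEC = 20_000, 0.95, 0.80   # sample size, sensitivity, specificity
results = {}

for base_rate in (0.001, 0.01, 0.10, 0.50):
    sick = N * base_rate
    healthy = N - sick
    true_pos = SENS * sick                      # cell a
    false_pos = (1 - SPEC) * healthy            # cell b
    p_pos = true_pos / (true_pos + false_pos)   # P(K | test positive)
    results[base_rate] = p_pos
    print(f"base rate {base_rate:6.1%}: P(K | positive) = {p_pos:.1%}")
```

The sample size N cancels out of the ratio, so the pattern holds for any N: the rarer the disease, the less a positive result means.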
To illustrate the importance of this method in objectively assessing various diagnostic tests, let's use a real-life example. One of the most feared diseases is cancer, and among women breast cancer easily tops the list. Some 25 years ago Dr. Charles Rogers was among the few surgeons who treated women who had a high risk of developing breast cancer. His treatment was straightforward: remove the high-risk breast before cancer is actually detected. Known as prophylactic mastectomy, the procedure had at that time been performed by Rogers on 90 women. A newspaper article reported that the research on which this procedure is premised showed that around 1 in 13 women develop breast cancer. The mammogram that was used to assess the risk factor had a 92% sensitivity, meaning that among those in whom cancer eventually developed, only 8% were incorrectly diagnosed as being low risk. Moreover, 57% of the general population of women were deemed to be at high risk.
From these numbers it looks as if there is a good basis for doing prophylactic mastectomy, while the fact that over half of the total population are considered high risk would give women panic attacks. Again, let's tabulate the data and perform our calculations to see the real score. Let's arbitrarily set our sample size N to 1,000. The article tells us that 57% are diagnosed as being at risk. That translates into 570 women (57% × 1,000), while the remaining 1,000 - 570 = 430 women turn out to have a low risk factor. Our base rate is 1 in 13 or 7.7%, which means 7.7% × 1,000 = 77 women actually develop breast cancer, while 1,000 - 77 = 923 won't, regardless of the mammogram result. The number of women who test high and eventually do get breast cancer is 71 (92% × 77, rounded off). Plugging these into our table, we can derive the figures for the remaining cells.
base rate = 7.7%, sensitivity = 92%, specificity = ?

|           || Develops cancer || No cancer || Total |
| High risk || 71              || 499       || 570   |
| Low risk  || 6               || 424       || 430   |
| Total     || 77              || 923       || 1,000 |
So what is the probability that those whose mammograms identify them as being at risk actually go on to develop cancer? 71 / 570 or just 12.5%. In other words, 87.5% (100% - 12.5%) of those identified as high risk will not develop breast cancer. Thus, the rationale for prophylactic mastectomy is more than highly questionable (unless other diagnostic tests could provide additional supporting evidence that would greatly increase our confidence that cancer will in fact develop).
The specificity of the mammogram isn't anything to be optimistic about: it's only 46% (424 / 923). Its performance is little better than a coin toss. But while specificity is a value researchers may be interested in, we and our doctors are only interested in knowing the probability of not developing cancer given that the mammogram shows a person to be at low risk. And it turns out that patients who're tagged as low risk can heave a sigh of relief: that probability is nearly 99% (424 / 430).
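Since this example gives the overall positive rate (57% flagged high risk) rather than the specificity, the table has to be filled in a slightly different order. A sketch of that bookkeeping, with variable names of my own choosing:

```python
N = 1_000
base_rate, sensitivity, p_high_risk = 0.077, 0.92, 0.57

cancer = round(N * base_rate)        # 77 women develop breast cancer
high = round(N * p_high_risk)        # 570 women flagged high risk
a = round(sensitivity * cancer)      # 71: high risk AND develops cancer
b = high - a                         # 499: high risk, no cancer
c = cancer - a                       # 6: low risk, develops cancer
d = (N - high) - c                   # 424: low risk, no cancer

print(f"P(cancer | high risk)   = {a / high:.1%}")          # about 12.5%
print(f"specificity             = {d / (N - cancer):.0%}")  # about 46%
print(f"P(no cancer | low risk) = {d / (N - high):.1%}")    # about 98.6%
```

Note that specificity falls out as a by-product here: it wasn't given, but the completed table determines it.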
The bottom line is: given a low base rate (implying an uncommon or rare disease), a test is pretty definitive for those who test negative, while those who test positive should not grieve prematurely. Nevertheless, since we are medically risk-averse, it would do us well to look into the matter further should we be among those in the high-risk group.
In the tables above what is most important to us are the probabilities obtained by examining the rows. These give us the probability that we have/don't have or will have/won't have the said disease given a particular test result.
One very important lesson we can learn here is that the probability of Y given Z is not the same as the probability of Z given Y. The probability of being sick with disease Y given test result Z is not the same as the probability of getting test result Z given that someone is already known to have disease Y. As we've seen in our last example, the probability of being flagged as at risk given that one eventually develops cancer is 92%. But the probability of developing cancer given that the mammogram flags one as being at risk is only 12.5%.
That inverse conditional probabilities do not yield the same results becomes rather intuitive when we consider some commonplace examples. The probability that a driver figures in a vehicular mishap given that he's drunk is certainly not the same as the probability that a driver is drunk given that he's been in a mishap; it "makes sense" to us that the former will most likely be higher than the latter. Here's one other example of what we intuitively understand: the probability that a randomly chosen man from a sample of males is 70 years of age is, we can safely say, smaller than the probability that a randomly selected individual from a pool of 70-year-olds is male (the latter probability is around 50%).
Just to provide a bird's-eye view and make things crystal clear, the table below lists all the various possible conditional probabilities for the 2×2 table that we've been using.
|                           || V1 (Yes/True) || V1' (No/False) || Total || Conditional probabilities                      |
| V2 (Yes/True)             || a             || b              || nr1   || p(V1 | V2) = a / nr1,  p(V1' | V2) = b / nr1   |
| V2' (No/False)            || c             || d              || nr2   || p(V1 | V2') = c / nr2,  p(V1' | V2') = d / nr2 |
| Total                     || nc1           || nc2            || N     ||                                                |
| Conditional probabilities || p(V2 | V1) = a / nc1,  p(V2' | V1) = c / nc1 || p(V2 | V1') = b / nc2,  p(V2' | V1') = d / nc2 || || |
Vx denotes the case when the variable is true and Vx' the case when the variable is false (the apostrophe mark is read "prime"). The vertical bar denotes a conditional probability, so p(V1 | V2) means "the probability of V1 given V2."
Rather than taking an arbitrary N and solving for frequencies, we can use the probability values directly. The following table shows the values that get plugged into the various cells. For more information on the basics of probability see Introduction to Probability.
| Event                     || A          || A'          || Total || Conditional probabilities                                 |
| B                         || P(A ∩ B)   || P(A' ∩ B)   || P(B)  || P(A|B) = P(A ∩ B)/P(B),  P(A'|B) = P(A' ∩ B)/P(B)         |
| B'                        || P(A ∩ B')  || P(A' ∩ B')  || P(B') || P(A|B') = P(A ∩ B')/P(B'),  P(A'|B') = P(A' ∩ B')/P(B')   |
| Total                     || P(A)       || P(A')       || 1     ||                                                           |
| Conditional probabilities || P(B|A) = P(A ∩ B)/P(A),  P(B'|A) = P(A ∩ B')/P(A) || P(B|A') = P(A' ∩ B)/P(A'),  P(B'|A') = P(A' ∩ B')/P(A') || || |
Let's now use the above table to look for the probability of having the disease given the diagnostic test result. From the definitions,
- Sensitivity = probability of testing positive/high given that the disease is present or that it will occur
- Specificity = probability of testing negative/low given that the disease is absent or that it will not occur
- Base rate = probability of the disease in the population
Let:
- S = sensitivity = P(T|D)
- C = specificity = P(T'|D')
- R = base rate = P(D)
- T = test positive/high, T' = test negative/low
- D = disease present/will occur, D' = disease absent/won't occur
We are looking for P(D|T), the probability that the disease is present/will occur given that the test is positive/shows high risk.
| Test result               || D          || D'          || Total || Conditional probabilities                                 |
| T                         || P(D ∩ T)   || P(D' ∩ T)   || P(T)  || P(D|T) = P(D ∩ T)/P(T),  P(D'|T) = P(D' ∩ T)/P(T)         |
| T'                        || P(D ∩ T')  || P(D' ∩ T')  || P(T') || P(D|T') = P(D ∩ T')/P(T'),  P(D'|T') = P(D' ∩ T')/P(T')   |
| Total                     || P(D)       || P(D')       || 1     ||                                                           |
| Conditional probabilities || P(T|D) = P(D ∩ T)/P(D),  P(T'|D) = P(D ∩ T')/P(D) || P(T|D') = P(D' ∩ T)/P(D'),  P(T'|D') = P(D' ∩ T')/P(D') || || |
From the conditional probability equations in the table above we see that sensitivity P(T|D) = P(D ∩ T)/P(D). Therefore, the value for cell a is P(D ∩ T) = P(D)·P(T|D), i.e., base rate × sensitivity or R·S. Likewise, the value for cell d is P(D' ∩ T') = P(D')·P(T'|D') = (1 - R)·C. Given those, the values for all the other cells can be found. The following table uses the disease K example (base rate = 1/1000). As you will see, the cell values are simply the original frequencies divided by N = 20,000.
base rate = 0.1%, sensitivity = 95%, specificity = 80%

|       || D       || D'     || Total   |
| T     || 0.00095 || 0.1998 || 0.20075 |
| T'    || 0.00005 || 0.7992 || 0.79925 |
| Total || 0.001   || 0.999  || 1       |
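As a quick check, here's a sketch that generates those cell probabilities directly from R, S, and C; the variable names are mine:

```python
R, S, C = 0.001, 0.95, 0.80   # base rate, sensitivity, specificity

a = R * S                     # P(D ∩ T)   = 0.00095
c = R * (1 - S)               # P(D ∩ T')  = 0.00005
b = (1 - R) * (1 - C)         # P(D' ∩ T)  = 0.1998
d = (1 - R) * C               # P(D' ∩ T') = 0.7992

p_T = a + b                   # P(T) = 0.20075
print(f"P(D|T) = {a / p_T:.2%}")   # the same 0.47% as the frequency table
```

Each value equals the corresponding count in the first disease-K table divided by N = 20,000, which is why the two approaches always agree.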
Lastly, instead of resorting to tables we can derive a formula for P(D|T) given sensitivity, specificity, and base rate.
From the complement rule
P(D') = 1 - P(D) = 1 - R
Let R' = P(D') = 1 - R
From the multiplication rule
P(D ∩ T) = P(D)·P(T|D)
P(D' ∩ T') = P(D')·P(T'|D')
From Rule A1 (total probability)
P(D') = P(D' ∩ T) + P(D' ∩ T')
P(D' ∩ T) = P(D') - P(D' ∩ T')
From the same rule
P(T) = P(D ∩ T) + P(D' ∩ T)
From the multiplication rule
P(D ∩ T) = P(T)·P(D|T)
P(D|T) = P(D ∩ T)/P(T)
P(D|T) = P(D)·P(T|D) / [P(D ∩ T) + P(D' ∩ T)]
P(D|T) = P(D)·P(T|D) / [P(D)·P(T|D) + P(D') - P(D')·P(T'|D')]
P(D|T) = R·S / [R·S + (1 - R) - (1 - R)(C)]
P(D|T) = R·S / [R·S + (1 - R)(1 - C)]
So we can use the above formula to solve for P(D|T) instead of creating a table. Unfortunately, it isn't applicable in situations such as the Rogers cancer example, since in that vignette what is given is P(T) and not the specificity. On the other hand, given P(T) the formula for computing P(D|T) is actually much simpler: P(D|T) = R·S / P(T). I leave the derivation to the reader.
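Both the closed-form formula and the P(T) shortcut are one-liners in code. A minimal sketch, with function names of my own choosing:

```python
def p_disease_given_positive(R, S, C):
    """P(D|T) from base rate R, sensitivity S, and specificity C."""
    return R * S / (R * S + (1 - R) * (1 - C))

def p_disease_given_positive_from_pt(R, S, p_T):
    """P(D|T) when the overall positive rate P(T) is known instead."""
    return R * S / p_T

# Disease K example: base rate 0.1%, sensitivity 95%, specificity 80%
print(p_disease_given_positive(0.001, 0.95, 0.80))          # about 0.0047

# Rogers example: base rate 7.7%, sensitivity 92%, P(T) = 57%
print(p_disease_given_positive_from_pt(0.077, 0.92, 0.57))  # about 0.124
```

The first function reproduces the 0.47% from the disease K tables, and the second reproduces the 12.5% from the mammogram example.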
- Dawes, Robyn M. 2001. Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail To Think Rationally. Boulder, CO: Westview. p. 75-79, 86-87.
- Ruscio, John. 2002. Clear Thinking with Psychology: Separating Sense from Nonsense. Pacific Grove, CA: Wadsworth. p. 154-158.