25.10.06

Medical Research and Signal Detection Theory

On my way home this afternoon, I heard an interesting story on NPR about a medical study of a new and exceptionally effective lung cancer screening technique. The story was interesting for two distinct, though related, reasons: it can be used to illustrate the utility of signal detection theory, and it is a rare example of accurate (and precise) media coverage of scientific research.

Signal detection theory's utility resides both in its ability to tease sensitivity and decision bias apart and in what it tells us about how they relate. For a given level of sensitivity, making your decision criterion more liberal will increase both the probability of accurately detecting a signal that is, in fact, present (i.e., your 'hit' rate) and the probability of inaccurately 'detecting' a signal that isn't (i.e., your 'false alarm' rate); making your decision criterion more conservative will have the opposite effect. Conversely, for a given decision criterion (defined in terms of hit rate), increasing sensitivity will lower the false alarm rate, while decreasing sensitivity will raise it.
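To put numbers on that relationship, here is a minimal sketch in Python (my own illustration, not anything from the study or the NPR piece) using the textbook equal-variance Gaussian model, in which hit and false alarm rates follow directly from sensitivity (d') and the placement of the decision criterion. It assumes SciPy is available for the normal distribution.

    from scipy.stats import norm

    def rates(d_prime, criterion):
        """Hit and false alarm rates in the equal-variance Gaussian model.

        Noise is N(0, 1), signal is N(d_prime, 1), and the criterion (c) is
        measured from the midpoint between the two distributions: c < 0 is
        liberal, c > 0 is conservative.
        """
        hit = norm.sf(criterion - d_prime / 2)          # P(say "signal" | signal present)
        false_alarm = norm.sf(criterion + d_prime / 2)  # P(say "signal" | signal absent)
        return hit, false_alarm

    # Loosening the criterion at fixed sensitivity raises both rates together...
    for c in (0.5, 0.0, -0.5):  # conservative -> liberal
        hit, fa = rates(1.0, c)
        print(f"d'=1.0, c={c:+.1f}: hit={hit:.2f}, FA={fa:.2f}")

    # ...while raising sensitivity at a fixed hit rate lowers the false alarm rate.
    for d_prime in (1.0, 2.0, 3.0):
        c = d_prime / 2 - norm.ppf(0.90)  # criterion that yields a 90% hit rate
        print(f"d'={d_prime:.1f}, hit=0.90: FA={rates(d_prime, c)[1]:.2f}")

Running it shows both halves of the claim: shifting the criterion trades hits against false alarms, and only a gain in sensitivity improves one without worsening the other.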

How does this relate to the study discussed in the NPR story linked above? The study presents a new, more sensitive test for early cases of lung cancer. This higher level of sensitivity will enable doctors to detect many more cases of lung cancer much earlier than they could before, which has two effects: more cases caught early could lead to more cases treated successfully, but also to more misdiagnosed false alarms and the inappropriate, expensive, and stressful treatment that goes with them.

Now, signal detection theory tells us that, at least in principle, sensitivity and decision bias are independent. In fact, there is a lot of experimental evidence that this is the case. For example, you can systematically shift people's decision criteria around by manipulating the relative frequency of occurrence of signal presence versus signal absence or the relative value of each type of response. Nonetheless, in a 'real world' situation like this, in which the stakes can be very high, decision bias and sensitivity can interact heavily.
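Those frequency and payoff manipulations have a tidy normative counterpart in the textbook ideal-observer analysis, sketched below purely as my own illustration (nothing from the study): an observer maximizing expected value should respond 'signal' whenever the likelihood ratio exceeds beta = [P(noise)/P(signal)] x [(value of a correct rejection + cost of a false alarm) / (value of a hit + cost of a miss)], which in the equal-variance model works out to a criterion of c = ln(beta)/d'. Rare signals and costly false alarms push the criterion conservative; common signals and costly misses push it liberal.

    from math import log

    def optimal_criterion(d_prime, p_signal, v_hit=1.0, c_miss=1.0, v_cr=1.0, c_fa=1.0):
        """Expected-value-maximizing criterion (measured from the midpoint)."""
        beta = ((1 - p_signal) / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))
        return log(beta) / d_prime

    # Base rates alone move the optimal criterion around (d' held at 2.0).
    for p in (0.1, 0.5, 0.9):
        print(f"P(signal)={p:.1f}: c={optimal_criterion(2.0, p):+.2f}")

    # Making misses very costly (say, an untreated cancer) pushes it back toward liberal.
    print(f"P(signal)=0.1, costly miss: c={optimal_criterion(2.0, 0.1, c_miss=20.0):+.2f}")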

The old (i.e., standard) tests are very insensitive to early lung cancer. That extreme insensitivity makes an adjustable decision criterion all but useless: only relatively conclusive evidence of lung cancer offers any grounds for deciding whether or not to get treatment. Now that a rather sensitive test is available, doctors are, in principle, free to set their decision criteria wherever they want. Hence, understanding the relationship between accurately catching and treating early cases and inaccurately mistreating non-cases becomes very important.
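To put rough numbers on that point (the d' values below are made up for the sake of the example, not taken from the study): with a very insensitive test, hit and false alarm rates stay nearly equal wherever the criterion sits, so there is nothing worth tuning; with a sensitive test, where the criterion is placed genuinely matters.

    from scipy.stats import norm

    for d_prime in (0.2, 2.5):  # hypothetical insensitive vs. sensitive test
        print(f"d' = {d_prime}")
        for c in (-1.0, 0.0, 1.0):  # liberal -> conservative
            hit = norm.sf(c - d_prime / 2)
            fa = norm.sf(c + d_prime / 2)
            print(f"  c={c:+.1f}: hit={hit:.2f}, FA={fa:.2f}")

At d' = 0.2 the test flags non-cases nearly as often as cases no matter where the line is drawn; at d' = 2.5 a sensibly placed criterion catches most early cases while flagging relatively few non-cases, which is exactly why the hit/false-alarm tradeoff is suddenly worth arguing about.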

How does this relate to accurate (and precise) media coverage of a research issue? The NPR piece does a good job of covering these issues, which seems to me to be unusual in science reporting. There are those on the 'pro-hit' side who take this study to indicate that lung cancer is on par with other forms of cancer that have become very treatable, and there are those on the 'anti-false-alarm' side who warn of the danger of, well, false alarms. While I don't believe that balance for balance's sake makes for good reporting, in this case balance is appropriate. The relationship between hits and false alarms makes that clear.

The report also discusses a methodological limitation of the study, namely that the lack of a control group severely limits what this study tells us about the efficacy of early diagnosis and treatment of lung cancer. Again, this attention to detail with regard to research is unusual in the media.

Whence 'precision'? All this in less than five minutes of audio.
