Statistics in the Laboratory: The Minimum Consistently Detectable Amount

In the previous column we discussed the limit of detection LD, the minimum signal strength above which we can be confident that analyte is present, but with a small false positive risk α that analyte is actually not present. LD is a numerical value on the y-axis of a calibration relationship.

In this column we’ll move our attention to the horizontal x-axis of a calibration relationship where we’ll find a numerical value for the minimum consistently detectable amount MCDA. This is the minimum amount of analyte for which we can be confident that analyte will be detected, but with a small false negative risk β that it won’t be detected.

Figure 1 reviews a calibration relationship between the signal strength y and the amount of analyte x. The y-intercept of the calibration relationship is the true mean of the blank μb. The chosen limit of detection LD is also shown on the y-axis. Though not shown in the figure, the uncertainty of the measurements is expressed as the standard deviation of the blank σb.

Figure 1 – A calibration relationship showing a censored region (the black rectangle).

Let’s get ready to do a thought experiment. A signal equal to the limit of detection LD corresponds (through the calibration relationship) to a certain amount of analyte that we’ll call the limit of detection amount LDA, as shown in Figure 1. Before we do the thought experiment, though, let’s discuss the black rectangle that appears in the figure.

The black rectangle represents a “censored” region. If the signal strength isn’t greater than LD, then it won’t be concluded that analyte has been detected, and no value will be reported to the client. Thus, the client will never receive reports that list amounts of analyte that are less than the LDA. We’ll return to this later.

Now back to the thought experiment. Let’s make up a sample that contains the limit of detection amount of analyte (the LDA) and measure it repeatedly. The results are shown in Figure 2. We ask the question, “How reliably can we detect analyte when the amount of analyte is equal to the LDA?” to which we answer, “Not very.” In fact, we’ll detect it only 50% of the time. Or, to state the opposite, 50% of the time we won’t detect it. These are the false negatives: analyte is present, but we don’t say it’s present. At the LDA, the false negative risk β is 0.5. Most persons would agree that 50% isn’t very reliable detection.

Figure 2 – The results of repetitively measuring a sample containing the LDA of analyte.
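
As a minimal numerical sketch of this thought experiment (assuming Gaussian measurement noise with standard deviation σb; the values of μb and σb below are illustrative, not from the column), the 50% detection rate at the LDA can be checked directly:

```python
from scipy.stats import norm

# Illustrative values (not from the column): blank mean and standard deviation
mu_b = 100.0                 # true mean signal of the blank
sigma_b = 5.0                # standard deviation of the blank
LD = mu_b + 3.3 * sigma_b    # limit of detection on the signal (y) axis

# A sample containing the LDA gives a mean signal exactly equal to LD, so the
# distribution of its repeated measurements is centered on LD.
p_detect_at_LDA = norm.sf(LD, loc=LD, scale=sigma_b)   # P(y > LD)

print(f"Probability of detection at the LDA: {p_detect_at_LDA:.2f}")   # 0.50
```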

So what do we have to do to get a sample that gives more reliable detection? Figure 2 suggests that we need to increase the amount of analyte in the sample. Just how much analyte we need depends on how reliably we want to be able to detect it, that is, on how small we want the false negative risk β to be. That is a question that you (the analyst) and your client must answer. Once you decide on the false negative risk β, then (for a given LD) you can calculate the MCDA.

Figure 3 – The results of repetitively measuring a sample containing the MCDA of analyte, where LD = μb + 3.3×σb, and MCDA = 2×LDA.

Figure 3 shows how this works. Let’s suppose you and your client decide that a false negative risk of 0.0005 would be appropriate, that is, if you submit a sample containing the MCDA, it will be detected 99.95% of the time [100%×(1 – β)]. That means it won’t be detected 0.05% of the time, and that will happen if the measured value y is less than the LD. Thus, the intermediate question becomes: How far above LD must the average signal for the MCDA lie so that a measured value y will fall below LD only 0.0005 of the time? Knowing that, we can determine (through the calibration relationship) what the MCDA should be.

From our discussion in the last column, we know that if we go out 3.3×σ in one direction from the center of a Gaussian curve, only 0.0005 of the area will remain in the excluded tail. That means we have to raise the mean signal for the MCDA 3.3×σb above LD. If LD is already 3.3×σb above μb (as it was in the last column), then the mean signal for the MCDA must be 6.6×σb above μb, or twice as far above μb as LD is. Looking back at the calibration relationship in Figure 1, if the mean signal for the MCDA is twice as far above μb as LD is, then the MCDA must be twice as large as the LDA.
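
As a hedged numerical check of this reasoning (a sketch only, assuming Gaussian noise and a linear calibration that passes through μb at zero analyte; the values of μb, σb, and the slope are illustrative, not from the column):

```python
from scipy.stats import norm

mu_b = 100.0      # true mean signal of the blank (illustrative)
sigma_b = 5.0     # standard deviation of the blank (illustrative)
slope = 20.0      # calibration slope, signal per unit amount (illustrative)

alpha = 0.0005    # false positive risk
beta = 0.0005     # false negative risk

z_alpha = norm.isf(alpha)    # ~3.29, rounded to 3.3 in the column
z_beta = norm.isf(beta)      # ~3.29

LD = mu_b + z_alpha * sigma_b                 # limit of detection (signal axis)
mean_signal_mcda = LD + z_beta * sigma_b      # mean signal required for the MCDA

# Translate signals into amounts through the calibration y = mu_b + slope * x
LDA = (LD - mu_b) / slope
MCDA = (mean_signal_mcda - mu_b) / slope

print(f"LDA = {LDA:.3f}   MCDA = {MCDA:.3f}   ratio = {MCDA / LDA:.2f}")   # ratio = 2.00
```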

This 2:1 relationship between the MCDA and the LDA will always occur if α = β. But there’s no fundamental reason the false positive risk should be the same as the false negative risk. Again, it is up to you (the analyst) and your client to decide appropriate values for these risks based on the application of the measurement method.
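
When the two risks differ, the ratio is no longer 2:1. A small sketch of the general relationship (same Gaussian and linear-calibration assumptions as above):

```python
from scipy.stats import norm

def mcda_to_lda_ratio(alpha: float, beta: float) -> float:
    """Ratio of the MCDA to the LDA for a Gaussian blank and a linear calibration.

    LD sits z(alpha)*sigma_b above mu_b, and the mean signal for the MCDA sits
    another z(beta)*sigma_b above LD, so the ratio of the two amounts is
    (z(alpha) + z(beta)) / z(alpha).
    """
    z_alpha = norm.isf(alpha)
    z_beta = norm.isf(beta)
    return (z_alpha + z_beta) / z_alpha

print(f"{mcda_to_lda_ratio(0.0005, 0.0005):.2f}")   # 2.00 when alpha equals beta
print(f"{mcda_to_lda_ratio(0.0005, 0.05):.2f}")     # ~1.50 for a laxer false negative risk
```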

Figure 4 shows the measurement method from the points of view of the analyst and the client. The right-pointing arrow with “Results Out” is what the client sees. If the signal strength is above the LD, then analyte will be “Confidently Detected” (with a false positive risk of α that there actually isn’t any analyte) and its estimated amount will be reported. But as we discussed earlier, if the signal strength is below the LD, then analyte will not be detected with confidence, and no amount will be reported. So the client sees reported values only to the right of the point marked LDA on the horizontal axis of Figure 4.

Figure 4 – The difference between “confidently detected” and “consistently detectable.”

The left-pointing arrow with “Samples In” is what the analyst sees. If the amount of analyte in the submitted sample is above the MCDA, then analyte will be “Consistently Detectable” (with a false negative risk of β that it won’t be detected), that is, if the sample contains at least the MCDA of analyte, then there is at least a 100%×(1 – β) probability that analyte will be detected.

So what about the gap between the LDA and the MCDA? It can be a source of confusion between you (the analyst) and the client if you haven’t discussed what happens in this region. Here’s a scenario. You’ve written up your measurement method and advertise a value for the MCDA; many clients will interpret this as the method’s so-called “limit of detection.” So they conclude, “The method can’t see amounts of analyte less than this.” But then you (the analyst) start reporting values to your client between the LDA and the MCDA, and the client wonders, “What’s this? These values are below the ‘limit of detection’! How can this be?”

It happens because there’s a difference between “confidently detected” and “consistently detectable.” Think about what happens to the “consistency of detection” as the amount of analyte in the sample goes from the MCDA down to the LDA. For the values of α and β we’ve been using in this teaching example (each equal to 0.0005), the probability of detection goes from 99.95% at the MCDA down to 50% at the LDA. To take a single example, it’s like this: if you (the analyst) report to the client an amount of analyte just slightly above the LDA, you can say, “I’m at least 99.95% confident that analyte is present and this is my best guess as to its value, but if you resubmit that sample, I can’t be at least 99.95% confident that I’ll detect analyte again.” So have this conversation with your client.
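
To make the idea of “consistency of detection” concrete, here is a minimal sketch (same illustrative assumptions as in the earlier snippets) of how the probability of detection climbs as the amount of analyte rises from the LDA to the MCDA:

```python
import numpy as np
from scipy.stats import norm

mu_b, sigma_b, slope = 100.0, 5.0, 20.0    # illustrative values, as before
LD = mu_b + 3.3 * sigma_b                  # limit of detection (signal axis)
LDA = (LD - mu_b) / slope                  # limit of detection amount (x axis)
MCDA = 2 * LDA                             # because alpha = beta in this example

for amount in np.linspace(LDA, MCDA, 5):
    mean_signal = mu_b + slope * amount                       # linear calibration
    p_detect = norm.sf(LD, loc=mean_signal, scale=sigma_b)    # P(y > LD)
    print(f"amount = {amount:.3f}   P(detect) = {p_detect:.4f}")
# The output runs from 0.5000 at the LDA up to about 0.9995 at the MCDA.
```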

In one of our short courses, a participant listened politely as we went through this material associated with Figure 4 and at the end said, “I don’t care. We never report anything below the MCDA anyway. Then we don’t have to discuss all this with our client.” I guess that’s OK if your client understands what you’re doing and agrees, but I don’t think it’s ethical to withhold potentially useful information from your client without their knowledge. It’s something to think about.

The false positive risk and the false negative risk can usually be translated into either a cost of doing business or a cost to society (or both). These risks should not be set flippantly, or solely by you (the analyst). Again, you and your client should have serious discussions about setting both of these risks.

In the next column we’ll leave the world of statistics and enter the world of ego. We’ll discuss the limit of quantitation LQ.

Stanley N. Deming, Ph.D., is an analytical chemist masquerading as a statistician at Statistical Designs, El Paso, TX, U.S.A.; e-mail: [email protected]; www.statisticaldesigns.com
