More and More on Diagnostic Testing

In my last blog post I discussed the importance of assessing the sensitivity and specificity of a diagnostic test. We asked the question: how well do our diagnostic tests help identify those with (sensitivity) and without (specificity) a condition of interest? I defined sensitivity as the ability of a diagnostic test to correctly identify those with a target disorder, and specificity as the ability of a diagnostic test to correctly identify those without the target disorder. And I showed how easy a calculation it is: if we had 100 people with the condition of interest and the test was positive in 90 of them, the sensitivity would be 90%. Note, though, that there would be 10 people with the condition who tested negative, a false negative rate of 10%. We can do the same kind of calculation for specificity, and find a false positive rate as well.

The reader will note that I have shied away from presenting this as a mathematical calculation. Traditionally, sensitivity and specificity are calculated from a 2×2 contingency table such as the one below:

                             Patients with      Patients without
                             the condition      the condition       Totals
Patients who test positive   a                  b                   a+b
Patients who test negative   c                  d                   c+d
Totals                       a+c                b+d                 a+b+c+d

Sensitivity would be, mathematically, a/(a+c). Specificity would be d/(b+d). But forget trying to remember even this simple calculation; think about it conceptually: if sensitivity is a measure of how well a test identifies someone with a target disorder, then it is simply the percentage of people who test positive out of all who have the condition.
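
For readers who want to see the arithmetic spelled out, here is a minimal sketch in Python (not part of the original post; the counts a, b, c and d below are hypothetical, chosen only to illustrate the two formulas):

# Hypothetical counts for the four cells of the 2x2 table (illustration only).
a = 90   # have the condition, test positive (true positives)
b = 5    # do not have the condition, test positive (false positives)
c = 10   # have the condition, test negative (false negatives)
d = 95   # do not have the condition, test negative (true negatives)

sensitivity = a / (a + c)   # positive tests among all who have the condition
specificity = d / (b + d)   # negative tests among all who do not have it

print(f"Sensitivity: {sensitivity:.0%}")   # Sensitivity: 90%
print(f"Specificity: {specificity:.0%}")   # Specificity: 95%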

But this table has some interesting elements to it. There is another question we can ask: of all the people who test positive, how many have the condition? That is, how accurate is our test? How well does it predict who has or does not have the condition? This is known as predictive value, and we can assess both a positive predictive value (how many people who test positive actually have the condition) and a negative predictive value (how many of those who test negative actually do not have the condition). These are often simply referred to as PPV and NPV.

Thus, if we consider this conceptually, all we need to do is figure out the percentage of those who test positive who really have the condition. If we had 100 people who tested positive, 90 of whom really do have the condition, we would have a PPV of 90%. And if we had 100 people who tested negative, 90 of whom really do not have the condition, we would have an NPV of 90%. If we wanted to calculate these from the 2×2 contingency table, PPV = a/(a+b), while NPV = d/(c+d). No problem! A low PPV suggests that our diagnostic test may not be as strong for ruling a disorder in as it is for ruling it out.
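
Carrying the same hypothetical counts forward, a short Python sketch (again, not from the original post and purely illustrative) shows how the predictive values read across the rows of the table rather than down the columns:

# Same made-up 2x2 counts as in the earlier sketch (illustration only).
a, b, c, d = 90, 5, 10, 95

ppv = a / (a + b)   # of all positive tests, the share that truly have the condition
npv = d / (c + d)   # of all negative tests, the share that truly do not

print(f"PPV: {ppv:.1%}")   # PPV: 94.7%
print(f"NPV: {npv:.1%}")   # NPV: 90.5%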

Using sensitivity, specificity, PPV and NPV in concert, you can get “a complete picture of the trustworthiness of a given diagnostic test and add that information to your clinical judgment.” (1)

But there is more to this picture…

References

  1. Howlett B, Rogo EJ, Shelton TG. Evidence-based practice for health professionals: an interprofessional approach. Burlington, MA: Jones and Bartlett, 2014: 242.
