
86 Journal of The Association of Physicians of India ■ Vol. 65 ■ July 2017

STATISTICS FOR RESEARCHERS

Statistical Evaluation of Diagnostic Tests – Part 2 [Pre-test and Post-test Probability and Odds, Likelihood Ratios, Receiver Operating Characteristic Curve, Youden’s Index and Diagnostic Test Biases]

NJ Gogtay, UM Thatte

Introduction

In the previous article on the statistical evaluation of diagnostic tests – Part 1, we understood the measures of sensitivity, specificity, positive and negative predictive values. The use of these metrics stems from the fact that no diagnostic test is ever perfect, and every time we carry out a test, it will yield one of four possible outcomes – true positive, false positive, true negative or false negative. The 2 x 2 table [Table 1] gives each of these four possibilities along with their mathematical calculations when a new test is compared with a gold standard test.1

In this article, the second in the diagnostic test series, we will discuss single summary statistics that help us understand and use these tests appropriately, both in the clinical context and when these summary statistics appear in the literature. Before we discuss these, we need to recapitulate a few concepts presented in earlier articles [odds and probability] and also introduce some novel concepts [Bayesian statistics, pre-test and post-test probabilities and odds].

Table 1: A 2 x 2 table depicting the results of a new test vis-à-vis a gold standard test

                 Disease present           Disease absent
Test positive    True positive [TP] a      False positive [FP] b     Positive predictive value = a/(a+b)
Test negative    False negative [FN] c     True negative [TN] d      Negative predictive value = d/(c+d)
                 Sensitivity = a/(a+c)     Specificity = d/(b+d)

Understanding Probability and Odds and the Relationship between the Two

Let us understand probability and odds with the example of a drug producing bleeding in 10/100 patients treated with it. The probability of bleeding will be 10/100 [10%], while the odds of bleeding will be 10/90 [11%]. This is because odds is defined as the probability of the event occurring divided by the probability of the event not occurring.2 Thus, every odds can be expressed as a probability and every probability as odds, as these are two ways of explaining the same concept. From the example, it follows that

Odds = p/(1 - p), where p is the probability of the event occurring.

Probability, on the other hand, is given by the formula

p = Odds/(1 + Odds)

Bayesian Statistics, Pre-test Probability and Pre-test Odds

A clinician often suspects that a patient has the disease even before he orders a test [screening or diagnostic] on the patient. For example, when a patient who is a chronic smoker presents with cough and weight loss of six months’ duration, the suspicion of lung cancer has already entered the physician’s mind.
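The probability–odds conversions above translate directly into code. As an illustrative sketch (not part of the original article), in Python:

```python
def probability_to_odds(p):
    """Convert a probability (0 <= p < 1) to odds: p / (1 - p)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Convert odds back to a probability: odds / (1 + odds)."""
    return odds / (1 + odds)

# The bleeding example from the text: 10 events in 100 treated patients.
p_bleed = 10 / 100                              # probability = 0.10
odds_bleed = probability_to_odds(p_bleed)
print(round(odds_bleed, 3))                     # 0.111, i.e. 10/90, about 11%
print(round(odds_to_probability(odds_bleed), 3))  # 0.1, recovering the probability
```

Applying the two functions in sequence recovers the starting value, which is the sense in which odds and probability are two ways of expressing the same concept.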

Department of Clinical Pharmacology, Seth GS Medical College & KEM Hospital, Mumbai, Maharashtra
Received: 06.05.2017; Accepted: 10.05.2017

Thus, the clinician has already, mentally, identified some “pre-test” probability of the patient having the disease; lung cancer in this case. Clinical decision-making, by and large, requires a combination of clinical acumen along with a correctly performed and interpreted screening or diagnostic test. When the physician allocates a “pre-test probability”, what he is applying is a field of statistics called Bayesian statistics. Herein, the knowledge of prior beliefs is used and quantified as a numerical value ranging from 0-100%.3 This value is then used for subsequent calculations. Bayesian statistics allows us to interpret screening and diagnostic tests in their clinical context.

Logically, the next question would be – what are the ways in which these pre-test probabilities can be allocated? These are listed below:
• Subjectively, based on informed opinion, consensus guidelines or experience in treating the disease in question
• An understanding of the evolution of the disease and matching it with how the disease has actually evolved in the patient
• Objectively, based on available evidence [prevalence data, for example]

In the example presented, the treating physician may assign a pretest probability of 60% or even higher based on his clinical acumen and what he sees in practice. How is this calculated? Let us say that the clinician is a lung cancer specialist and he sees 100 patients in three months who are chronic smokers with persistent cough and weight loss. Sixty of them eventually return a diagnosis of lung cancer based on one or more tests. The pretest probability for a new patient with a similar history and complaints who presents to him in the fourth month would thus be 60%. Mathematically, this is calculated as

Pre-test probability = Number of patients with complaints actually diagnosed to have the disease / Total number of patients who present with the same complaints

[In this case, it would be 60/100 or 60%]. Pretest odds, however, would be 0.6/0.4 or 1.5 (the probability of the event occurring divided by the probability of the event not occurring).

The clinician next orders a test, which he hopes will confirm [or refute] his diagnosis. The test result and the pre-test probability together will now be used to calculate the post-test probability as described below.

Post-test Probability and Post-test Odds

Since the result of a diagnostic test can be either positive or negative, post-test probabilities are either positive or negative. Mathematically,
• Post-test odds = Pre-test odds x Likelihood ratio (see below for an explanation of the likelihood ratio), while
• Post-test probability = Post-test odds/(1 + Post-test odds)

The Likelihood Ratio [A Summary Statistic]

Likelihood ratios [LR] combine both sensitivity and specificity into a single measure and are an alternate way of evaluating and interpreting diagnostic tests.4 They help in making a choice of a diagnostic test or a sequence of tests. LRs essentially tell us how many times more [or less] likely a test result is to be found in diseased compared to non-diseased people. LRs are of two types – positive and negative. A positive likelihood ratio is given by

Likelihood ratio [positive] LR+ = Sensitivity [TP] / (1 - Specificity) [FP]

while a negative likelihood ratio is given by

Likelihood ratio [negative] LR- = (1 - Sensitivity) [FN] / Specificity [TN]

Let us understand this with an example. When physical examination is carried out in patients with suspected acute appendicitis, there is rebound tenderness at or about the McBurney’s point, pain on percussion, rigidity, and guarding. The positive likelihood ratio for the diagnosis of appendicitis would be the ratio of those with appendicitis who have tenderness at McBurney’s point [sensitivity] to those without appendicitis who have tenderness at McBurney’s point [falsely positive, or 1 - specificity], OR

Likelihood ratio [positive] LR+ = The number of patients with appendicitis who have localized tenderness at the McBurney’s point / The number of patients without appendicitis who have localized tenderness at the McBurney’s point

The negative likelihood ratio LR- would be

LR- = The number of patients with appendicitis who don’t have localized tenderness at the McBurney’s point / The number of patients without appendicitis who don’t have localized tenderness at the McBurney’s point

If we were to express both of these mathematically, based on the 2 x 2 table, these would be as given below.

Likelihood ratio positive or LR+ = The probability of obtaining a positive test result in patients with disease [TP] / The probability of obtaining a positive test result in patients without the disease [FP]

On the other hand, a negative likelihood ratio or LR- would be the probability of obtaining a negative test result in patients with disease [FN] divided by the probability of obtaining a negative test result in patients without the disease [TN].
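Expressed over the cells of the 2 x 2 table, the two ratios are straightforward to compute. A small Python sketch (the counts here are hypothetical, chosen so that sensitivity is 80% and specificity 90%):

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Compute LR+ and LR- from the four cells of a 2 x 2 table.

    LR+ = sensitivity / (1 - specificity)
    LR- = (1 - sensitivity) / specificity
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical counts for a physical sign versus a confirmed disease:
lr_pos, lr_neg = likelihood_ratios(tp=80, fp=10, fn=20, tn=90)
print(round(lr_pos, 2))  # 8.0   (0.8 / (1 - 0.9))
print(round(lr_neg, 2))  # 0.22  ((1 - 0.8) / 0.9)
```

An LR+ of 8 means a positive result is eight times as likely in the diseased as in the non-diseased, consistent with the interpretation of LRs given above.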

Since different tests for the same disease have different sensitivities and specificities, each test would yield a different likelihood ratio for the same disease. Let us understand this with an example. The diagnosis of prostate cancer can be made by both digital rectal examination [DRE] and trans-rectal ultrasonography [TRUS]. Manyahi JP and colleagues5 in their study found the sensitivity of DRE to be 66.7%, and the specificity to be 88.6%. The values for TRUS were 58.3% and 85.7% respectively. The LR+ for DRE thus would be 5.8 [0.667/(1-0.886)], while that for TRUS would be 4.1 [0.583/(1-0.857)]. The LR- for the two tests similarly would be 0.38 [(1-0.667)/0.886] and 0.49 [(1-0.583)/0.857] respectively.

LRs range from 0 to infinity. LRs more than 1 argue for the presence of the disease, and numbers further away from 1 strengthen this argument; they thus rule in the disease. LRs between 0 and 1 argue against the diagnosis of interest. Values of 1 or close to 1 indicate that the test may lack diagnostic value. LR- values below 1 indicate that the result is likely to be associated with the absence of the disease.4

While LRs are good measures of diagnostic accuracy, they are seldom used in clinical practice as they require a knowledge of probabilities and involve calculations. Nomograms such as Fagan’s nomogram [https://mclibrary.duke.edu/sites/mclibrary.duke.edu/files/public/guides/nomogram.pdf] are available6 for ease of use of LRs, but may not always be at hand for a quick bedside diagnosis. The logarithm of the likelihood ratio [the log likelihood ratio statistic] is used to compute a p value, which is then compared with the critical p value of 5% that we routinely use to check for the statistical significance of a calculated LR.

Clinical Application – Putting Together Probability, Odds and the Likelihood Ratio

Having understood the concepts of probability and odds, pre-test and post-test probabilities and the likelihood ratios, we need to put all of them together to see how they actually help in clinical decision making; the sequence for this is given below:
• Calculate the pre-test probability (p)
• Derive the pre-test odds as p/(1-p)
• Conduct the test [screening or diagnostic] with an appreciation of its sensitivity and specificity
• See the result – positive or negative
• Calculate post-test odds = pre-test odds x likelihood ratio [a positive LR is used for a positive test and vice versa]
• Calculate post-test probability = post-test odds/(1 + post-test odds)
• Make a decision regarding the diagnosis

Let us understand this with the same hypothetical example. Let us say that a 60-year old male patient with 20 pack years of smoking presents with cough and weight loss of 6 months’ duration. The treating physician knows from the literature that the pre-test probability of lung cancer is 60% in those with 20 pack years or more in the 50-75 age group.
• Thus, pre-test probability = 60% or 0.6
We now convert the pre-test probability into pre-test odds:
• Pre-test odds = 0.6/(1-0.6) or 0.6/0.4 or 1.5
We now conduct a CT scan [low dose], which returns a diagnosis of lung cancer. In other words, the test is “positive”. Literature tells us7 that low dose CT has an approximate sensitivity of 80% and a specificity of 90%. Thus, the positive likelihood ratio would be
• LR+ = Sensitivity [0.8]/(1 - Specificity) [1-0.9] = 8 [this LR+ indicates that the test result is more likely in someone with lung cancer than someone without]
We now calculate the post-test odds as pre-test odds x likelihood ratio:
• Thus, post-test odds = 1.5 x 8 = 12
Finally, we want to convert the post-test odds into a post-test probability:
• i.e., 12/(1 + 12) = 12/13 or 0.92 or 92% [indicating a high probability that the patient has lung cancer]
What if the CT scan results had been negative? Here, the pre-test probability of 0.6 and the pre-test odds of 1.5 would have remained unaltered. However, we would now need to calculate the negative LR or LR-:
• Negative likelihood ratio [LR-] = (1 - sensitivity)/specificity
• Or (1-0.8)/0.9 = 0.22
Now, the post-test odds would be pre-test odds x LR-:
• Or 1.5 x 0.22 = 0.33
The post-test probability would be 0.33/(1 + 0.33) = 0.25 or 25% [a much lower probability of the patient having lung cancer].

Based on these single summary statistics [92% or 25%], the physician will take the next steps towards management. However, as stated earlier, because LRs involve tedious calculations that include conversion of odds to probabilities, they are rarely used in clinical practice.
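The step-by-step sequence above can be collapsed into a single function. A Python sketch that reproduces the lung-cancer arithmetic (pre-test probability 0.6, low-dose CT sensitivity 0.8, specificity 0.9):

```python
def post_test_probability(pre_test_p, sensitivity, specificity, test_positive):
    """Bayes via likelihood ratios:
    pre-test odds x LR -> post-test odds -> post-test probability."""
    pre_odds = pre_test_p / (1 - pre_test_p)
    if test_positive:
        lr = sensitivity / (1 - specificity)   # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity   # LR- for a negative result
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Lung-cancer example: pre-test probability 0.6, low-dose CT sens 0.8, spec 0.9
print(round(post_test_probability(0.6, 0.8, 0.9, test_positive=True), 2))   # 0.92
print(round(post_test_probability(0.6, 0.8, 0.9, test_positive=False), 2))  # 0.25
```

The two printed values match the 92% and 25% post-test probabilities worked out by hand above.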
Fig. 1: A typical Receiver Operating Characteristic curve and its components [Reproduced with permission from Indian Pediatrics]10

Table 2: Area under the ROC curve and interpretation of the diagnostic accuracy of the test9

Area under the ROC curve    Interpretation of the test accuracy
1                           Perfect
0.9-1                       Excellent
0.8-0.9                     Good
0.7-0.8                     Fair
0.5-0.7                     Poor

Receiver Operating Characteristic [ROC] Curve and its Interpretation

The ROC curve is a plot of the sensitivity or true positive rate on the y-axis and 1 minus specificity, or the false positive rate, on the x-axis. Figure 1 depicts the various components of the ROC curve, and these are described below.

The point where the x and y axes begin [0,0] depicts 0% sensitivity and 100% specificity. Both sensitivity and specificity are 0 where the x axis ends [1,0]. The upper end of the y axis [0,1] would be the ideal test with 100% sensitivity and 100% specificity. If we were to draw yet another x-axis at the top, parallel to the one below, its outer end [1,1] would depict 100% sensitivity and 0% specificity [Figure 1]. The line that connects the beginning of the lower x-axis to the end of the upper x-axis is called the line of equality or random chance line, where x [false positive rate] = y [true positive rate]. Thus, any ROC curve that appears below this line indicates that the test performs worse than random guessing.

Each point on the ROC curve represents a sensitivity-specificity pair corresponding to a certain decision threshold. An ideal test would be one that has 100% sensitivity and 100% specificity, and thus its curve will pass through the upper left corner [Figure 1]. Since no test is really ideal and we trade off between sensitivity and specificity, the closer the curve is to the upper left corner, the better is its accuracy. The area under the ROC curve [AUC], with the total area of the plot taken as 1, is a useful metric for evaluating the performance of the test. The closer the value of the AUC is to 1, the better is the discriminatory ability of the test [Table 2 and Figure 1]; curves closer to the line of equality indicate worse performance of the test in predicting the presence or absence of disease. Since the curve is based on the metrics of sensitivity and specificity alone, the ROC curve is independent of disease prevalence.8

Applications of the ROC curve – Any ROC curve helps serve the following four purposes:10
a. Finding the cut off that least misclassifies diseased and non-diseased individuals
b. Assessing the discriminatory ability of the test
c. Comparing the discriminatory ability of two or more diagnostic tests for assessing the same disease
d. Comparing two or more observers performing the same test [inter-observer variability]

The Youden’s Index [A Summary Statistic]

It is useful to summarize the information from a ROC curve into a single statistic or index. One of the commonly used indices is the Youden’s index “J”. This index gives the maximum vertical distance from the line of equality to a point [x, y] on the curve [Figure 1]. In other words, the Youden index J corresponds to that point on the ROC curve that is furthest away from the line of equality [the diagonal line] and maximizes the difference between the sensitivity [true positive rate] and the false positive rate [1 - specificity].10,11 It is calculated by deducting 1 from the sum of the test’s sensitivity and specificity, expressed not as percentages but as parts of a whole number; in other words, it is (sensitivity + specificity) – 1. For a test with poor diagnostic accuracy, Youden’s index equals 0, and a perfect test will have a Youden’s index of 1.

Diagnostic Odds Ratio [A Summary Statistic]

The diagnostic odds ratio [DOR] is yet another summary statistic for diagnostic accuracy that is used for the evaluation of the discriminative abilities of diagnostic procedures, as also for the comparison of diagnostic accuracies between two or more diagnostic tests. The DOR of a test is defined as the ratio of the odds of positivity in individuals with disease relative to the odds of positivity in individuals without disease. It is calculated, similar to the odds ratio seen in an earlier article,12 as a cross product from the 2 x 2 table [Table 1] and given by the formula

DOR = (TP x TN) ÷ (FP x FN)

As is evident from its calculation, the DOR depends significantly on the sensitivity and specificity of the test.
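The AUC, Youden’s index and DOR described above are all simple to compute directly. A minimal Python sketch; the biomarker scores and 2 x 2 counts below are invented for illustration, not data from the article:

```python
def roc_points(scores_diseased, scores_healthy):
    """(FPR, TPR) pairs obtained by sweeping a cutoff over all observed scores.

    Each threshold t classifies a score as "test positive" when score >= t.
    """
    thresholds = sorted(set(scores_diseased + scores_healthy), reverse=True)
    points = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        fpr = sum(s >= t for s in scores_healthy) / len(scores_healthy)
        points.append((fpr, tpr))
    if points[-1] != (1.0, 1.0):          # make sure the curve ends at (1, 1)
        points.append((1.0, 1.0))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule (total plot area = 1)."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def youden_j(points):
    """Youden's J = maximum over the curve of (sensitivity + specificity - 1)."""
    return max(tpr - fpr for fpr, tpr in points)

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR as the cross product of the 2 x 2 table: (TP x TN) / (FP x FN)."""
    return (tp * tn) / (fp * fn)

# Hypothetical biomarker readings (higher values = more disease-like):
diseased = [7.1, 6.4, 5.9, 8.2, 6.8]
healthy = [4.2, 5.1, 3.9, 6.0, 4.8]
pts = roc_points(diseased, healthy)
print(round(auc(pts), 2), round(youden_j(pts), 2))        # 0.96 0.8
print(diagnostic_odds_ratio(tp=80, fp=10, fn=20, tn=90))  # 36.0
```

The counts TP = 80, FP = 10, FN = 20, TN = 90 correspond to a sensitivity of 80% and a specificity of 90%, so the DOR of 36 agrees with the matching cell of Table 4.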

Table 3: A 2 x 2 table depicting the calculation of the diagnostic odds ratio as a cross product ratio

                 Disease present    Disease absent
Test positive    TP                 FP
Test negative    FN                 TN

Table 4: Diagnostic odds ratios for varying combinations of sensitivity and specificity13

Specificity [%]                  Sensitivity [%]
                  50     60     70     80     90     95     99
50                 1      2      2      4      9     19     99
60                 2      2      4      6     14     29    149
70                 2      4      5      9     21     44    231
80                 4      6      9     16     36     76    396
90                 9     14     21     36     81    171    891
95                19     29     44     76    171    361   1881
99                99    149    231    396    891   1881   9801

A test with a high specificity and sensitivity [i.e., low rates of false positives and false negatives] will have a high DOR. It is also important to remember here that the same DOR may be achieved with different combinations of sensitivity and specificity. As an illustration, a DOR of 4 can arise from four combinations of sensitivity and specificity [Table 4].13

Reporting of Studies Using Diagnostic Tests - The STARD and QUADAS Checklists

STARD stands for “Standards for Reporting Diagnostic Accuracy Studies” and is a checklist of 30 items developed by the STARD steering group, an independent group of researchers who formulated this checklist in an attempt to ensure both completeness and transparency of reporting by authors, and also to help editors and peer reviewers assess the adequacy and quality of information. Authors need to use this checklist in manuscripts that report studies involving screening or diagnostic tests and their accuracy. STARD can be viewed at http://www.stard-statement.org/.14 The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool is a 14-item checklist that helps in the evaluation of diagnostic accuracy studies, primarily for use in preparing and presenting systematic reviews.15

Statistical Tests to be Used when Diagnostic Tests are Compared

When two screening or diagnostic tests are conducted on the same patient, the results amount to “paired” data, and since the outcomes are either positive or negative, they constitute “binary” outcomes. The McNemar’s test is used for this type of comparison. When the two tests are conducted on independent populations, we use the chi-square or Fisher’s exact test.16

Understanding Biases when Using Diagnostic Tests - Spectrum Bias and the Imperfect Gold Standard Bias

An important and often overlooked aspect of diagnostic test evaluation is spectrum bias. In general, patients who present later in the course of a disease are easier to diagnose than those who present early, as with the latter, signs may be subtle and difficult to pick up. Spectrum bias is a form of selection bias that results when a test is used for a disease that has a wide spectrum of severity.17 Thus, the values of sensitivity and specificity obtained for any test are driven by the population that is being studied, and different populations would yield different values of the two metrics.

Let us understand this with an example. If we are evaluating a test for detecting patients with diabetes, we could have in our “disease” population patients ranging from very mild diabetes at one end to severe or even uncontrolled diabetes at the other end of the spectrum. Any diagnostic test study that limits the diabetic patients to the “sickest of the sick” will overestimate the sensitivity of the test, while similarly, another study that uses only the “wellest of the well” [those who are truly non-diabetic; for instance, the very young] will overestimate specificity.18

Another bias is the “imperfect gold standard” bias.19 When a new test [also called the index test] is being evaluated, it is compared with an existing “gold standard” or reference test. An ideal gold standard test would be one that “rules in” ALL patients with disease and “rules out” ALL those without. Unfortunately, gold standards are rarely perfect and can themselves misclassify those with and without disease, leading to what we call an “imperfect gold standard”. Let us understand this with the example of malaria diagnosis. The current gold standard is the peripheral smear. In the hands of trained and expert technicians, the test sensitivity is 50 parasites/ml of blood and results are made available within 30 minutes.20 The use of this “gold standard” will logically result in declaring parasitemias of less than 50 parasites/ml as falsely negative. The polymerase chain reaction [PCR], on the other hand, which detects specific nucleic acid sequences of the parasite, has a much higher sensitivity at 5 parasites/ml. However, it is time consuming, technically demanding and expensive.

The PCR also detects non-viable parasites that may be present even after successful anti-malarial treatment, which can confuse the treating physician.21 Thus, despite its inherent limitation of much lower sensitivity [relative to the PCR], the peripheral smear still remains the “gold standard” [albeit imperfect] for the diagnosis of malaria. Some other biases include uninterpretable or indeterminate test bias and inter-observer bias.10

Conclusions

Few topics in the medical field are more important than screening and diagnostic tests, as these are ordered nearly every day as an important aid to clinical decision making. Diagnoses are made based on a combination of patient history and physical examination. Tests are often ordered to confirm initial impressions or rule out alternatives, and it is estimated that 10% of all diagnoses are not considered final until clinical laboratory testing is complete.22 The utility of any test must be assessed bearing in mind its discriminatory ability [to distinguish between health and disease], the nature and severity of the disease in question, the ease of availability of the test and the risks associated with its use, an understanding of the several diverse metrics [with their limitations] that go into interpreting the results, cost considerations, and finally the impact on patient management based on the results of the test.

Research studies that publish findings using diagnostic tests must be critically appraised using the STARD criteria, as also with an appreciation of whether the population on whom the test was used is similar to or different from the one that a physician actually sees in his practice. Finally, laboratorians who carry out diagnostic testing, clinicians who treat patients and clinician-researchers who interpret evidence need to work in tandem. This enables better linkage of the results of diagnostic testing with the patient. When coupled with continued monitoring of the effectiveness of these tests, we would ensure both optimal outcomes for the individual patient as also decisions that would drive health policy for nations.

Acknowledgements

The authors are grateful to Dr. Seema Kembhavi, from the Department of Radiodiagnosis at the Tata Memorial Hospital, for constructive inputs that helped refine the manuscript.

References

1. Parikh R, Mathai A, Parikh S, Chandra Sekhar G, Thomas R. Understanding and using sensitivity, specificity and predictive values. Indian J Ophthalmol 2008; 56:45-50.
2. Bland MJ. The odds ratio. BMJ 2000; 320:1468.
3. Gill CJ, Sabin L, Schmid CH. Why clinicians are natural Bayesians. BMJ 2005; 330:1080-3.
4. McGee S. Simplifying likelihood ratios. J Gen Intern Med 2002; 17:647-650.
5. Manyahi JP, Musau P, Mteta AK. Diagnostic values of digital rectal examination, prostate specific antigen and trans-rectal ultrasound in men with prostatism. East Afr Med J 2009; 86:450-3.
6. Fagan TJ. Nomogram for Bayes theorem. N Engl J Med 1975; 293:257.
7. Toyoda Y, Nakayama T, Kusunoki Y, Iso H, Suzuki T. Sensitivity and specificity of lung cancer screening using chest low-dose computed tomography. Br J Cancer 2008; 98:1602-1607.
8. Linden A. Measuring diagnostic and predictive accuracy in disease management: an introduction to receiver operating characteristic (ROC) analysis. J Eval Clin Pract 2006; 12:132-39.
9. Šimundić A-M. Measures of diagnostic accuracy: basic definitions. EJIFCC 2009; 19:203-211.
10. Kumar R, Indrayan A. Receiver operating characteristic [ROC] curve for medical researchers. Indian Pediatr 2011; 48:277-287.
11. Ruopp MD, Perkins NJ, Whitcomb BW, Schisterman EF. Youden index and optimal cut-point estimated from observations affected by a lower limit of detection. Biom J 2008; 50:419-430.
12. Gogtay NJ, Deshpande S, Thatte UM. Measures of association. J Assoc Physicians India 2016; 64:70-73.
13. http://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/DTA%20Handbook%20Chapter%2011%20201312.pdf, accessed on 3rd June 2017.
14. http://www.stard-statement.org/, accessed on 13th May 2017.
15. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM; QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011; 155:529-36.
16. Deshpande SP, Gogtay NJ, Thatte UM. Which test where? J Assoc Physicians India 2016; 64:64-66.
17. Schmidt LR, Factor ER. Understanding sources of bias in diagnostic accuracy studies. Arch Pathol Lab Med 2013; 137:558-565.
18. Willis HB. Spectrum bias - why clinicians need to be cautious when applying diagnostic test studies. Fam Pract 2008; 25:390-96.
19. Kohn AM, Carpenter RC, Newman BT. Understanding the direction of bias in studies of diagnostic test accuracy. Acad Emerg Med 2013; 20:1194-1206.
20. Moody A. Rapid diagnostic tests for malaria parasites. Clin Microbiol Rev 2002; 15:66-78.
21. Srinavasan S, Moody AH, Chiodini PL. Comparison of blood-film microscopy, the OptiMAL® dipstick, Rhodamine 123 and PCR for monitoring anti-malarial treatment. Ann Trop Med Parasitol 2000; 94:227-232.
22. Wahner-Roedler DL, Chaliki SS, Bauer BA, et al. Who makes the diagnosis? The role of clinical skills and diagnostic test results. J Eval Clin Pract 2007; 13:321-5.
