Andy Vail
Biostatistics
University of Manchester
FOCUS Training Day, 30th April 2012
Opener
Definition
Reliability analysis
• Categories
  – Cohen's Kappa is a measure of the strength of agreement between two categorisations, adjusted for chance agreement
  – kappa = (observed - chance) / (maximum - chance)
• Continuous
  – Bland-Altman: plot differences against average value
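The chance correction in kappa can be illustrated with a short Python sketch (ratings here are made up for illustration; for proportions of agreement the "maximum" is 1):

```python
# Cohen's kappa for two raters, illustrating
# kappa = (observed - chance) / (maximum - chance), with maximum = 1.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # Observed: proportion of items on which the two categorisations agree.
    observed = sum(x == y for x, y in zip(rater1, rater2)) / n
    # Chance: agreement expected from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    chance = sum(c1[cat] * c2.get(cat, 0) for cat in c1) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical ratings of six items by two raters:
r1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
r2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohens_kappa(r1, r2), 3))  # observed 4/6, chance 1/2 -> kappa 0.333
```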
Proof of concept 2
[Figure: histograms of test results (0-10), frequency on the vertical axis]
Discrimination analysis

                  True Diagnosis
                    Y    N
Test Result   Y     a    b
              N     c    d

• Sensitivity = a/(a+c) = 90/100 = 90%
Confidence intervals
                  True Diagnosis
                    Y    N
Test Result   Y     a    b
              N     c    d

• PPV: a/(a+b)
• NPV: d/(c+d)
• LR+: sens/(1-spec)
• LR-: (1-sens)/spec
• DOR: (a/c)/(b/d) = ad/bc
Predictive values (PPV & NPV)
• Depend on prevalence
  – meaningless in Case-Control design
  – may not transfer to different settings
• Positive Predictive Value
  – Proportion of positive results that are correct
  – PPV = (Prev x Sens) / [Prev x Sens + (1-Prev)(1-Spec)]
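The prevalence dependence is easy to show numerically. A minimal sketch of the formula above, using the sensitivity (90%) and specificity (80%) from the worked example; the prevalences are chosen purely for illustration:

```python
def ppv(prev, sens, spec):
    # PPV = (Prev x Sens) / [Prev x Sens + (1-Prev)(1-Spec)]
    return (prev * sens) / (prev * sens + (1 - prev) * (1 - spec))

# The same test (sensitivity 90%, specificity 80%) at two prevalences:
print(round(ppv(0.50, 0.9, 0.8), 2))  # 0.82 -- matches the 2x2 example, where prevalence is 50%
print(round(ppv(0.05, 0.9, 0.8), 2))  # 0.19 -- the same test in a low-prevalence setting
```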
Likelihood Ratios
Example

                  True Diagnosis
                    Y    N
Test Result   Y    90   20
              N    10   80

• PPV: 90/(90+20) = 82%
• NPV: 80/(10+80) = 89%
• LR+: 90/20 = 4.5
• LR-: 10/80 = 0.125
• DOR: (90/10)/(20/80) = 36
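All of the figures above follow from the four cell counts; a quick check in Python, using the formulas from the previous slide:

```python
# Cell counts from the example table (rows: test result, columns: true diagnosis).
a, b, c, d = 90, 20, 10, 80

sens = a / (a + c)          # 90/100  = 0.90
spec = d / (b + d)          # 80/100  = 0.80
ppv = a / (a + b)           # 90/110 ~= 0.82
npv = d / (c + d)           # 80/90  ~= 0.89
lr_pos = sens / (1 - spec)  # 4.5
lr_neg = (1 - sens) / spec  # 0.125
dor = (a / c) / (b / d)     # ad/bc = 36

print(sens, spec, round(ppv, 2), round(npv, 2))
```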
ROC curves
[Figure: distribution of test results (0-10)]
Determining a cut-off
[Figure: ROC curves plotting Sensitivity against 1-Specificity. The curve runs from the maximum threshold at (0, 0) to the minimum threshold at (1, 1). A perfect test reaches the top-left corner; guessing gives the diagonal (AuROC = 0.5). The 'optimal' cut-off minimises errors; points further up the curve are preferable if high sensitivity is key, points further down if high specificity is key. Example curves shown: AuROC = 0.94 and AuROC = 0.59.]
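AuROC has a useful interpretation: the probability that a randomly chosen diseased subject scores higher on the test than a randomly chosen non-diseased one (ties counting half). A minimal sketch on made-up test results:

```python
def auroc(neg_scores, pos_scores):
    # Probability that a random positive outscores a random negative, ties count 1/2.
    # Equivalent to the area under the ROC curve (Mann-Whitney U scaled to [0, 1]).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

neg = [1, 2, 3, 3, 5]  # hypothetical test results, disease absent
pos = [4, 5, 6, 7, 8]  # hypothetical test results, disease present
print(auroc(neg, pos))  # 0.94: far better than guessing (0.5)
```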
STARD (Standards for Reporting of Diagnostic Accuracy Studies)
Clinical/cost effectiveness
Prosecutor’s fallacy
Summary