Deconfusion Matrix
This is an excerpt from my introduction to p-values. Different terms are used depending on whether you're discussing scientific hypothesis testing, medical tests, or machine learning, but the same concepts underlie all of these fields.
Hint: Try setting the base rate very low, and observe how some of the numbers may not match your intuition; the sketch after the list below works through such a case.
TP / (TP + FN): True positive rate, a.k.a. sensitivity, statistical power, or recall.
FN / (TP + FN): False negative rate.
TN / (TN + FP): True negative rate, a.k.a. specificity.
Prevalence, a.k.a. base rate: the fraction of all cases in which the condition is actually present.
TP: True positives. (Here TP, FP, TN, and FN each denote a cell of the confusion matrix expressed as a fraction of all cases, so the four of them sum to 100%.)
FP: False positives, a.k.a. type I errors.
TN: True negatives.
FN: False negatives, a.k.a. type II errors.
TP + FP: Overall chance that a result comes up positive.
TP + TN: Accuracy, the chance that a result is correct.
TP / (TP + FP): Positive predictive value, a.k.a. "precision", the chance that a positive result is correct.
TN / (TN + FN): Negative predictive value, the chance that a negative result is correct.
FP / (TP + FP): False discovery rate, the chance that a positive result is wrong.
(TP / FP) / (FN / TN): Odds ratio.
(TP / (TP + FP)) / (FN / (FN + TN)): Relative risk.
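To make the hint above concrete, here is a minimal Python sketch, my own illustration rather than the code behind the original interactive calculator, that builds the four cells from a chosen sensitivity, specificity, and base rate and then derives the quantities listed above. The function names and example numbers are assumptions chosen for the demonstration.

```python
def confusion_cells(sensitivity: float, specificity: float, prevalence: float):
    """Return (TP, FP, TN, FN) as fractions of all cases, so the four sum to 1."""
    tp = prevalence * sensitivity               # condition present, test positive
    fn = prevalence * (1 - sensitivity)         # condition present, test negative
    tn = (1 - prevalence) * specificity         # condition absent, test negative
    fp = (1 - prevalence) * (1 - specificity)   # condition absent, test positive
    return tp, fp, tn, fn


def derived_metrics(tp: float, fp: float, tn: float, fn: float) -> dict:
    """Compute the derived quantities listed above from the four cell fractions."""
    return {
        "overall positive rate": tp + fp,
        "accuracy": tp + tn,
        "positive predictive value (precision)": tp / (tp + fp),
        "negative predictive value": tn / (tn + fn),
        "false discovery rate": fp / (tp + fp),
        "odds ratio": (tp / fp) / (fn / tn),
        "relative risk": (tp / (tp + fp)) / (fn / (fn + tn)),
    }


if __name__ == "__main__":
    # A test that is 99% sensitive and 99% specific, applied where the base rate is 0.1%.
    tp, fp, tn, fn = confusion_cells(sensitivity=0.99, specificity=0.99, prevalence=0.001)
    for name, value in derived_metrics(tp, fp, tn, fn).items():
        print(f"{name}: {value:.4f}")
```

With these numbers the positive predictive value comes out to roughly 0.09: even though the test is right 99% of the time on both sick and healthy cases, about 91% of its positive results are false, because the 1% of the large healthy majority that tests positive swamps the small number of true positives.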