[[Classifiers]] are algorithms that map input data to a class (or category). Binary classifiers are algorithms that map input data to **one** of **two** possible classes (e.g. 0 or 1).
The performance of binary classifiers can be evaluated using a **confusion matrix**.
![[Pasted image 20250324234120.png]]
The numbers in the [[Matrices|Matrix]] above are just example values.
- TP = True Positive
- FP = False Positive
- FN = False Negative
- TN = True Negative
Using these numbers we can calculate Accuracy, Precision, and Sensitivity. These metrics actually apply to **categorical classifiers** (i.e. not *just* binary ones, but ones with an arbitrary number of possible [[Enumeration|enumerated]] classes), though Precision and Sensitivity are computed per class in the multi-class case.
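As a minimal sketch of where these four counts come from, here is a Python example that tallies the confusion-matrix cells; the `y_true`/`y_pred` label lists are hypothetical example data, not the numbers from the matrix above.
```python
# Count confusion-matrix cells for a binary classifier.
# y_true / y_pred are hypothetical example labels (1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

print(tp, fp, fn, tn)  # 3 1 1 3
```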
# Accuracy
The percentage of all predictions (positive and negative) that are correct.
`accuracy = (TP + TN) / (TP + TN + FP + FN)`
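A quick worked example, reusing the hypothetical counts from the sketch above:
```python
# Accuracy from the hypothetical counts above (tp=3, fp=1, fn=1, tn=3).
tp, fp, fn, tn = 3, 1, 1, 3
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(accuracy)  # 0.75 -> 75% of all predictions were correct
```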
# Precision
The percentage of **predicted positive** results that are actually positive.
`precision = TP / (TP + FP)`
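Continuing with the same hypothetical counts:
```python
# Precision from the hypothetical counts above (tp=3, fp=1).
tp, fp = 3, 1
precision = tp / (tp + fp)
print(precision)  # 0.75 -> 75% of predicted positives were truly positive
```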
# Sensitivity
The percentage of **actual positive** cases the model correctly identifies as positive (also called **recall**).
`sensitivity = TP / (TP + FN)`
A test that **always returns positive** would have a sensitivity of 100%, since it can never produce a false negative.
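Same hypothetical counts again, plus a check of the always-positive claim:
```python
# Sensitivity from the hypothetical counts above (tp=3, fn=1).
tp, fn = 3, 1
sensitivity = tp / (tp + fn)
print(sensitivity)  # 0.75 -> caught 75% of actual positives

# An always-positive model never produces false negatives (fn == 0),
# so sensitivity = tp / (tp + 0) = 1.0 (100%).
```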
# Test Threshold
You can "raise" or "lower" the bar to trade off between **precision** and **sensitivity**. Visually, this looks like moving a horizontal line in a hypothetical [[scatter chart]] up and down to change where the cutoff lies between the model classifying things as positive or negative.
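A small sketch of that trade-off, assuming a hypothetical model that outputs scores in [0, 1] and classifies a case as positive when its score clears the threshold; raising the threshold here raises precision and lowers sensitivity.
```python
# Sweep a decision threshold and watch precision/sensitivity trade off.
# scores are hypothetical model outputs; y_true are hypothetical labels.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
y_true = [1,    1,    0,    1,    1,    0,    0,    0]

for threshold in (0.25, 0.50, 0.75):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  sensitivity={sensitivity:.2f}")

# threshold=0.25  precision=0.67  sensitivity=1.00  (low bar: catch everything)
# threshold=0.75  precision=1.00  sensitivity=0.50  (high bar: only sure cases)
```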
# More
## Source
- Grad School
## Related