Confusion Matrix Calculator

Enter confusion matrix counts to compute precision, recall, specificity, F1 score, and more. Understand how your model performs on positives and negatives.

Summary cards (example results for TP = 50, FP = 5, FN = 10, TN = 40):

Accuracy | 85.71% | Overall correctness
Precision | 90.91% | Quality of positive predictions
Recall (Sensitivity) | 83.33% | Coverage of actual positives
Specificity | 88.89% | True negative rate
F1 Score | 86.96% | Harmonic mean of precision and recall
Balanced Accuracy | 86.11% | Average of sensitivity and specificity

Metric | Value | Interpretation
False Positive Rate | 11.11% | Share of negatives incorrectly classified as positive
False Negative Rate | 16.67% | Share of positives missed by the model
Negative Predictive Value | 80.00% | Confidence in negative predictions
Positive Support | 60 | Actual positive examples (TP + FN)
Negative Support | 45 | Actual negative examples (TN + FP)
Total Samples | 105 | Sum of all confusion matrix entries
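
The false positive rate, false negative rate, and negative predictive value are not repeated in the Formula section below, so here is a minimal sketch of how they follow from the four counts (plain Python with illustrative variable names, using the counts behind the example results above):

```python
# Sketch: secondary metrics from the four confusion matrix counts.
tp, fp, fn, tn = 50, 5, 10, 40

false_positive_rate = fp / (fp + tn)        # 5 / 45  -> 11.11%
false_negative_rate = fn / (fn + tp)        # 10 / 60 -> 16.67%
negative_predictive_value = tn / (tn + fn)  # 40 / 50 -> 80.00%

positive_support = tp + fn          # 60 actual positives
negative_support = tn + fp          # 45 actual negatives
total_samples = tp + fp + fn + tn   # 105

print(f"FPR: {false_positive_rate:.2%}")
print(f"FNR: {false_negative_rate:.2%}")
print(f"NPV: {negative_predictive_value:.2%}")
```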

How to Use This Calculator

  1. Enter confusion matrix counts for true/false positives and negatives.
  2. Check the summary cards for headline metrics such as accuracy, precision, recall, and F1 score.
  3. Use the detailed table to review error rates, predictive values, and class supports.
  4. Experiment with different counts to see how changing each cell of the matrix shifts the metrics (a short sketch follows this list).
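
For step 4, here is a minimal sketch (plain Python, outside the calculator) that varies only the false positive count: precision drops as FP grows, while recall, which does not depend on FP, stays fixed:

```python
# Sketch: vary one confusion matrix cell and watch which metrics respond.
tp, fn = 50, 10  # actual positives stay fixed, so recall does not move

for fp in (0, 5, 10, 20, 40):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"FP = {fp:2d}  precision = {precision:.2%}  recall = {recall:.2%}")
```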

Formula

Accuracy = (TP + TN) / (TP + FP + FN + TN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Specificity = TN / (TN + FP)

F1 = 2 · Precision · Recall / (Precision + Recall)

Balanced Accuracy = (Recall + Specificity) / 2

These metrics summarize different aspects of classifier performance. Select the ones that match your project's goals (for example, precision for fraud detection or recall for medical screening).
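
As a rough sketch of how these formulas translate to code (the function name headline_metrics is illustrative, not part of the calculator), assuming every denominator is non-zero:

```python
# Sketch: headline metrics from the four confusion matrix counts.
# Assumes no denominator is zero; see the FAQ below for the undefined case.
def headline_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f1": f1,
        "balanced_accuracy": balanced_accuracy,
    }

# Reproduces the example cards above: TP = 50, FP = 5, FN = 10, TN = 40.
for name, value in headline_metrics(50, 5, 10, 40).items():
    print(f"{name}: {value:.2%}")
```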

Full Description

The confusion matrix is the foundation of binary classification evaluation. It tabulates how many positive and negative samples were correctly or incorrectly predicted. From these counts, you can derive metrics that balance precision, recall, and error rates according to your priorities.

This calculator streamlines that process. Enter raw counts from model predictions and instantly view the resulting metrics, without needing spreadsheets or manual formulas.
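
For concreteness, here is a minimal sketch of how the four counts could be tallied from true labels and predictions before being entered into the calculator; the 0/1 label encoding and the example lists are assumptions:

```python
# Sketch: tally TP, FP, FN, TN from binary labels (1 = positive, 0 = negative).
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

print(tp, fp, fn, tn)  # these four counts are what the calculator expects
```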

Frequently Asked Questions

What if precision or recall is undefined?

If the denominator of a metric is zero (for example, no predicted positives), the metric is undefined. The calculator displays “—” to indicate this situation.
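
A guarded division is one way to mirror that behaviour in your own code; the helper names below are illustrative, not the calculator's internals:

```python
# Sketch: return None instead of dividing by zero, and display it as a dash.
from typing import Optional

def safe_ratio(numerator: int, denominator: int) -> Optional[float]:
    return numerator / denominator if denominator else None

def display(value: Optional[float]) -> str:
    return f"{value:.2%}" if value is not None else "—"

# No predicted positives: TP = 0 and FP = 0, so precision is undefined.
tp, fp = 0, 0
print("Precision:", display(safe_ratio(tp, tp + fp)))  # Precision: —
```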

Can this handle multi-class confusion matrices?

This version focuses on binary classification. For multi-class problems, compute one-vs-rest metrics per class or extend the calculator with per-class confusion data.
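
A minimal sketch of the one-vs-rest approach, assuming string class labels (the example labels are made up); each class gets its own binary confusion matrix:

```python
# Sketch: one-vs-rest counts for a single class of a multi-class problem.
def one_vs_rest_counts(y_true, y_pred, positive_class):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive_class and p == positive_class)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive_class and p == positive_class)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive_class and p != positive_class)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive_class and p != positive_class)
    return tp, fp, fn, tn

y_true = ["cat", "dog", "bird", "cat", "dog"]
y_pred = ["cat", "cat", "bird", "cat", "dog"]
print(one_vs_rest_counts(y_true, y_pred, "cat"))  # counts for "cat" vs. the rest
```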

How do support values help?

Support indicates how many samples belong to each actual class. It helps you understand class imbalance, which can heavily influence metrics like accuracy.
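
A quick sketch of why support matters: with an assumed 95/5 class split, a model that only ever predicts the majority class still scores 95% accuracy while its recall is 0%:

```python
# Sketch: accuracy looks good on imbalanced data even with zero recall.
tp, fp, fn, tn = 0, 0, 5, 95  # 5 actual positives, 95 actual negatives, all predicted negative

accuracy = (tp + tn) / (tp + fp + fn + tn)  # 95 / 100 = 95%
recall = tp / (tp + fn)                     # 0 / 5 = 0%
print(f"accuracy = {accuracy:.0%}, recall = {recall:.0%}")
```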

What are good thresholds for these metrics?

Thresholds depend on the application: high recall may matter more in medical screening, while precision could be critical for spam detection. Use domain knowledge to set practical targets.