Loss Functions Cheat Sheet & Calculator | MLStat.com

Machine Learning Loss Functions Explained

Every machine learning model learns by minimizing a loss function — a mathematical measure of how far its predictions deviate from the truth. Choosing the right loss function is one of the most consequential decisions you'll make as an ML practitioner. This interactive cheat sheet walks you through six essential loss functions, shows you exactly how each one penalizes errors, and gives you a live calculator to experiment with custom inputs.

Quick Reference — Six Core Loss Functions

Each loss function has a distinct formula, error profile, and best-use scenario. The table below gives you a side-by-side comparison.

| Loss Function | Type | Formula (per sample) | Error Penalty | Best For |
| --- | --- | --- | --- | --- |
| MSE (L2) | Regression | (y – ŷ)² | Quadratic: large errors punished heavily | Gaussian noise, smooth optimization |
| MAE (L1) | Regression | \|y – ŷ\| | Linear: all errors weighted equally | Robust to outliers, interpretable |
| Cross-Entropy | Classification | –[y log(ŷ) + (1 – y) log(1 – ŷ)] | Logarithmic: confident wrong predictions penalized severely | Binary & multi-class classification |
| Hinge | SVM | max(0, 1 – y·ŷ) | Margin-based: penalizes only points inside the margin or on the wrong side | Support vector machines, max-margin classifiers |
| Focal | Class imbalance | –(1 – ŷ)ᵞ log(ŷ) | Modulated cross-entropy: down-weights easy examples | Class imbalance, object detection |
| Huber | Robust regression | L2 if \|δ\| ≤ c, else L1 (δ = residual, c = threshold) | Quadratic near zero, linear beyond threshold | Mixed noise, outlier-robust regression |
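
For reference, here is a minimal NumPy sketch of all six losses in mean-reduced form. The function names, the gamma = 2.0 and c = 1.0 defaults, and the eps clamp are illustrative choices for this page, not any particular library's API:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: quadratic penalty."""
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    """Mean absolute error: linear penalty."""
    return np.mean(np.abs(y - y_hat))

def binary_cross_entropy(y, p, eps=1e-12):
    """Binary cross-entropy; p is the predicted probability of class 1."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

def hinge(y, score):
    """Hinge loss; y in {-1, +1}, score is the raw (unsquashed) margin score."""
    return np.mean(np.maximum(0.0, 1.0 - y * score))

def focal(y, p, gamma=2.0, eps=1e-12):
    """Binary focal loss (alpha class weighting omitted for brevity)."""
    p = np.clip(p, eps, 1 - eps)
    p_true = np.where(y == 1, p, 1 - p)  # probability assigned to the true class
    return np.mean(-((1 - p_true) ** gamma) * np.log(p_true))

def huber(y, y_hat, c=1.0):
    """Huber loss: quadratic for |residual| <= c, linear beyond."""
    r = y - y_hat
    return np.mean(np.where(np.abs(r) <= c,
                            0.5 * r ** 2,
                            c * (np.abs(r) - 0.5 * c)))
```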

Mean Squared Error (MSE) — L2 Loss

The workhorse of regression problems. MSE squares the difference between predicted and actual values, which means large errors contribute disproportionately to the total loss.

Formula & Behavior

L = (1/n) Σ(yᵢ – ŷᵢ)²

Because errors are squared, an error of 5 contributes 25× the loss of an error of 1. This makes MSE highly sensitive to outliers — a single bad prediction can dominate the gradient. MSE is differentiable everywhere and leads to smooth, convex optimization in linear models.
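
A quick numerical check of that claim, using made-up residuals:

```python
import numpy as np

# Squaring amplifies: an error of 5 costs 25x an error of 1.
print(np.array([1.0, 5.0]) ** 2)         # [ 1. 25.]

# One outlier dominates the mean:
residuals = np.array([0.5, -0.3, 0.2, 10.0])
print(np.mean(residuals ** 2))           # 25.095, almost all from the outlier
```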

When to Use

Choose MSE when your target variable has approximately Gaussian noise and you want the model to strongly avoid large errors. It's the default for linear regression, neural network regression heads, and any scenario where outlier sensitivity is acceptable — or even desired.

How MSE penalizes error magnitude — quadratic growth means large errors dominate.

Mean Absolute Error (MAE) — L1 Loss

MAE takes the absolute difference between prediction and target. Every unit of error increases the loss by the same amount — no amplification, no discount.

Formula & Behavior

L = (1/n) Σ|yᵢ – ŷᵢ|

MAE is linear in the error, so it treats all deviations equally. The gradient is constant (±1) except at zero, where it's undefined. This makes optimization slightly trickier (subgradient methods), but the payoff is robustness: a single outlier can't warp the loss surface the way it does with MSE.
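
To see the robustness difference, compare MAE and MSE on the same illustrative residuals:

```python
import numpy as np

residuals = np.array([0.5, -0.3, 0.2, 10.0])  # one outlier

print(np.mean(np.abs(residuals)))  # 2.75: the outlier contributes its share, no more
print(np.mean(residuals ** 2))     # 25.095: the outlier dominates the loss
```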

When to Use

MAE shines when your data contains outliers or long-tailed noise. It's the go-to loss for median regression, weather forecasting, and any application where you prefer a model that's "mostly right" over one that sacrifices many small errors to avoid one huge one.

MAE grows linearly with error — constant penalty per unit of deviation.

Cross-Entropy Loss (Log Loss)

The default loss for classification. Cross-entropy measures the dissimilarity between two probability distributions: the true labels and the model's predicted probabilities.

Formula & Behavior

L = –[y log(ŷ) + (1–y) log(1–ŷ)]

When the model is confidently wrong, the log term blows up: predicting probability 0.99 for the wrong class costs about 4.6 nats, and the loss grows without bound as the prediction approaches certainty. This creates a huge gradient signal that rapidly corrects the mistake. Cross-entropy is convex for logistic regression and works beautifully with softmax output layers.
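
A small sketch of that divergence, assuming a true label of 1 and sweeping the predicted probability downward:

```python
import numpy as np

def bce(y, p):
    """Binary cross-entropy for a single prediction."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# True label is 1; the loss grows without bound as p -> 0.
for p in [0.9, 0.5, 0.1, 0.01, 0.001]:
    print(f"p = {p:<6} loss = {bce(1, p):.3f}")
# p = 0.9    loss = 0.105
# p = 0.5    loss = 0.693
# p = 0.1    loss = 2.303
# p = 0.01   loss = 4.605
# p = 0.001  loss = 6.908
```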

When to Use

Use cross-entropy for any binary or multi-class classification problem: spam detection, image classification, sentiment analysis. It pairs naturally with sigmoid (binary) or softmax (multi-class) activation functions. It's also the foundation for more advanced losses like focal loss.

Cross-entropy penalizes confident wrong predictions increasingly severely, diverging as confidence approaches 1.

Hinge Loss (SVM Loss)

Hinge loss is the loss function behind support vector machines. It doesn't penalize correctly classified examples that fall outside the margin — only those inside the margin or on the wrong side.

Formula & Behavior

L = max(0, 1 – y·ŷ)

If the true label y ∈ {–1, +1} and the prediction ŷ have the same sign and |ŷ| ≥ 1, the loss is zero. Otherwise, the loss increases linearly. This "margin" property means the model only cares about the hardest examples: the points on or inside the margin, which in SVMs become the support vectors that define the decision boundary.
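
The margin behavior is easy to verify by hand; a minimal sketch with a fixed true label of +1 and a few illustrative scores:

```python
y = 1  # true label in {-1, +1}

for score in [2.0, 1.0, 0.5, 0.0, -1.0]:
    loss = max(0.0, 1.0 - y * score)
    print(f"score = {score:>5} -> loss = {loss}")
# score =   2.0 -> loss = 0.0   (correct, outside the margin: no penalty)
# score =   1.0 -> loss = 0.0   (exactly on the margin)
# score =   0.5 -> loss = 0.5   (correct but inside the margin)
# score =   0.0 -> loss = 1.0   (on the decision boundary)
# score =  -1.0 -> loss = 2.0   (misclassified: penalty grows linearly)
```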