Binary Cross-Entropy Loss Calculator
The standard loss function for binary classification.
Formula first
Overview
Binary Cross-Entropy Loss, or Log Loss, quantifies the difference between two probability distributions: the actual binary labels and the predicted probabilities. It applies a heavy logarithmic penalty to predictions that are confident yet incorrect, guiding optimization algorithms like gradient descent to improve model accuracy.
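For a single example with true label y and predicted probability p, the loss is:

L = -[y ln(p) + (1 - y) ln(1 - p)]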
Symbols
Variables
y = true label (0 or 1), p = predicted probability, L = loss
Apply it well
When To Use
When to use: This function is specifically designed for binary classification tasks where the output is a single probability value between 0 and 1. It is most commonly used as the objective function for logistic regression and neural networks that utilize a sigmoid activation function in the output layer.
Why it matters: Unlike simple classification error, this loss function is differentiable, which is essential for backpropagation in deep learning. It ensures that the model is penalized more severely for being 'confidently wrong' than for being 'uncertainly wrong,' leading to more robust probabilistic predictions.
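To make the penalty gap concrete, here is a minimal sketch in plain Python (no framework; the probabilities are made up for illustration) comparing a confidently wrong prediction with an uncertain one when the true label is 0:

```python
import math

def bce(y, p):
    """Binary cross-entropy for a single example, using the natural log."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# True label is 0: a confidently wrong prediction costs far more than an uncertain one.
print(round(bce(0, 0.95), 3))  # 2.996 (confidently wrong)
print(round(bce(0, 0.55), 3))  # 0.799 (uncertainly wrong)
print(round(bce(0, 0.05), 3))  # 0.051 (confidently right)
```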
Avoid these traps
Common Mistakes
- Using log base 10; the formula uses the natural logarithm (ln).
- Passing p = 0 or p = 1 exactly, which makes the loss diverge to infinity; clip predictions slightly away from the boundaries (see the sketch after this list).
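A common guard against the second trap is to clip predictions away from exactly 0 and 1 before taking the log. A minimal sketch (the epsilon value 1e-7 is an arbitrary choice, not a standard):

```python
import math

def bce_stable(y, p, eps=1e-7):
    """Binary cross-entropy with p clipped away from exactly 0 and 1."""
    p = min(max(p, eps), 1.0 - eps)  # avoid ln(0), which would diverge to infinity
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(bce_stable(1, 0.0), 3))  # 16.118 instead of infinity
```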
One free problem
Practice Problem
A medical diagnostic model predicts a 0.85 probability that a patient has a specific condition. If the patient actually has the condition (y=1), calculate the binary cross-entropy loss.
Solve for: L, the loss.
Hint: Since y=1, the formula simplifies to L = -ln(p).
The full worked solution stays in the interactive walkthrough.
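As a quick numerical check of the hint (a sketch using Python's math module; the step-by-step reasoning stays in the walkthrough):

```python
import math

# y = 1, so the loss reduces to -ln(p) with p = 0.85.
print(round(-math.log(0.85), 3))  # 0.163
```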
References
Sources
- Wikipedia: Cross-entropy
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning
- Christopher Bishop, Pattern Recognition and Machine Learning
- Standard machine learning curriculum