Binary Cross-Entropy Loss

A loss function for binary classification.

Core idea

Overview

Binary Cross-Entropy Loss, or Log Loss, quantifies the difference between two probability distributions: the actual binary labels and the predicted probabilities. It applies a heavy logarithmic penalty to predictions that are confident yet incorrect, guiding optimization algorithms like gradient descent to improve model accuracy.

When to use: This function is specifically designed for binary classification tasks where the output is a single probability value between 0 and 1. It is most commonly used as the objective function for logistic regression and neural networks that utilize a sigmoid activation function in the output layer.

Why it matters: Unlike simple classification error, this loss function is differentiable, which is essential for backpropagation in deep learning. It ensures that the model is penalized more severely for being 'confidently wrong' than for being 'uncertainly wrong,' leading to more robust probabilistic predictions.
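
To make the 'confidently wrong' penalty concrete, here is a minimal NumPy sketch of the formula L = -[y ln(p) + (1 - y) ln(1 - p)], evaluated at a few illustrative probabilities (the numbers are arbitrary examples, not from any dataset):

    import numpy as np

    def bce(y, p):
        # Binary cross-entropy for a single label/probability pair
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    # True label is 1: confident-and-wrong costs far more than uncertain-and-wrong
    print(bce(1, 0.01))  # ~4.61  confident and wrong
    print(bce(1, 0.40))  # ~0.92  uncertain and wrong
    print(bce(1, 0.99))  # ~0.01  confident and right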

Symbols

Variables

  • y (True Label): the actual class, 0 (negative) or 1 (positive).
  • p (Predicted Prob): the model's predicted probability that y = 1, strictly between 0 and 1.
  • L (Loss): the resulting non-negative penalty for the prediction.

Walkthrough

Derivation

Derivation of Binary Cross-Entropy (Log Loss)

Derives the binary cross-entropy loss as the negative log-likelihood for independent Bernoulli-labelled data.

  • Targets are binary labels: y_i ∈ {0, 1}.
  • Observations are independent (i.i.d., so the likelihood factorizes).
  • Model outputs are probabilities: 0 < p_i < 1.
1

Write the Bernoulli likelihood:

ℒ(p) = ∏_i p_i^(y_i) · (1 - p_i)^(1 - y_i)

If y_i = 1 the i-th term contributes p_i; if y_i = 0 it contributes (1 - p_i). Independence lets us multiply across i.

2

Take the log-likelihood:

log ℒ(p) = Σ_i [ y_i ln(p_i) + (1 - y_i) ln(1 - p_i) ]

The log turns products into sums and makes optimization easier.

3

Convert to a minimization objective:

L = -(1/N) Σ_i [ y_i ln(p_i) + (1 - y_i) ln(1 - p_i) ]

Minimizing the negative average log-likelihood is equivalent to maximizing the likelihood; this is binary cross-entropy.

Result

L = -(1/N) Σ_{i=1}^{N} [ y_i ln(p_i) + (1 - y_i) ln(1 - p_i) ]

Source: Standard curriculum — Machine Learning
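
As a sanity check on the derivation, the sketch below (with made-up labels and probabilities) confirms numerically that the negative average log of the factorized Bernoulli likelihood equals the vectorized binary cross-entropy:

    import numpy as np

    y = np.array([1, 0, 1, 1])           # hypothetical labels
    p = np.array([0.9, 0.2, 0.6, 0.75])  # hypothetical predicted probabilities

    # Steps 1-2: factorized Bernoulli likelihood, then its log
    log_likelihood = np.log(p**y * (1 - p)**(1 - y)).sum()

    # Step 3: negative average log-likelihood is the binary cross-entropy
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    assert np.isclose(bce, -log_likelihood / len(y))
    print(bce)  # ~0.2818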

Visual intuition

Graph

For a fixed label, the loss curve is logarithmic and one-sided: with y = 1, L = -ln(p) climbs toward infinity as p approaches 0 and falls to 0 as p approaches 1; with y = 0, L = -ln(1 - p) is the mirror image. Overlaying the two branches gives the familiar U-shaped penalty profile, minimized where the prediction matches the target value (y).

Graph type: logarithmic
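
A quick way to see this shape without plotting software is to tabulate both branches of the loss; this small sketch prints L across a grid of probabilities:

    import numpy as np

    p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
    print(-np.log(p))      # y = 1 branch: ~4.61, 2.30, 0.69, 0.11, 0.01
    print(-np.log(1 - p))  # y = 0 branch: the mirror image, exploding as p -> 1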

Why it behaves this way

Intuition

Imagine a curved penalty landscape where the 'height' of the surface represents the loss. The landscape sits at zero where predictions perfectly match the true labels, but it climbs steep walls (high loss) as predictions drift confidently toward the wrong extreme.

  • L (Loss): the calculated loss value for a single prediction. A higher loss indicates a greater discrepancy between the model's predicted probability and the actual outcome, signaling a need for the model to adjust its parameters.
  • y (True Label): the true binary label for the instance (0 for the negative class, 1 for the positive class). Represents the ground truth that the model aims to predict correctly.
  • p (Predicted Prob): the model's predicted probability that the true label is 1 (the positive class). Reflects the model's confidence in the positive outcome, ranging from 0 (certainly negative) to 1 (certainly positive).

Signs and relationships

  • -: The natural logarithm of a probability (a value between 0 and 1) is always negative or zero. The leading negative sign inverts this value, ensuring that the loss function is non-negative and can be minimized during training.
  • ln(): The logarithmic function imposes a heavy penalty when the model makes a confident but incorrect prediction. For instance, if the true label y is 1 but p is very close to 0, ln(p) becomes a large negative number, so the loss -ln(p) becomes very large.

Free study cues

Insight

Canonical usage

Binary Cross-Entropy Loss is a dimensionless quantity that quantifies the error between predicted probabilities and true binary labels in classification tasks.

Common confusion

A common mistake is to attempt to assign units to the loss value (L) or to use percentages directly in the formula for 'p' without converting them to decimal probabilities.

Dimension note

Binary Cross-Entropy Loss is inherently dimensionless because it operates on probabilities and binary labels, which are dimensionless quantities.

Unit systems

y: dimensionless - the true binary label, typically 0 or 1.
p: dimensionless - the predicted probability, a value between 0 and 1.
L: dimensionless - the resulting loss value is a dimensionless scalar.

Ballpark figures

  • Random guessing (p = 0.5) gives L = ln 2 ≈ 0.693, a common baseline; confident correct predictions push L toward 0, while confident wrong ones push it past 4 (e.g. -ln(0.01) ≈ 4.6).

One free problem

Practice Problem

A medical diagnostic model predicts a 0.85 probability that a patient has a specific condition. If the patient actually has the condition (y=1), calculate the binary cross-entropy loss.

Given:
  • True Label (0/1): y = 1
  • Predicted Prob: p = 0.85

Solve for: L (Loss)

Hint: Since y = 1, the formula simplifies to L = -ln(p).
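
A quick check of the arithmetic (the step-by-step reasoning is left as practice):

    import math

    # y = 1, so L = -ln(p) = -ln(0.85)
    print(-math.log(0.85))  # ~0.1625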


Where it shows up

Real-World Context

In training a cat/dog classifier, each image's predicted probability of the positive class is compared with its true label via Binary Cross-Entropy Loss. The result matters because the averaged loss both measures how well-calibrated the model's confidence is and supplies the gradient that tells the optimizer how to adjust the model's weights.
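
As a sketch of how the loss drives learning in practice, here is a toy logistic-regression loop on synthetic data (the data, learning rate, and iteration count are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)   # one synthetic feature
    y = (x > 0).astype(float)  # synthetic binary labels

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(200):
        p = 1 / (1 + np.exp(-(w * x + b)))  # sigmoid output in (0, 1)
        loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        # For sigmoid + BCE, the gradient w.r.t. the pre-activation is (p - y)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    print(loss)  # steadily decreases as the weights sharpen the decision boundary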

Study smarter

Tips

  • Avoid input probabilities of exactly 0 or 1 to prevent numerical instability or undefined natural logs; see the clipping sketch after this list.
  • The loss value will be 0 only if the predicted probability perfectly matches the target label.
  • In multi-class scenarios, use Categorical Cross-Entropy instead of this binary variation.
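
One common guard for the first tip is to clip probabilities away from the endpoints before taking logs; this is a convention (the epsilon below is an arbitrary choice), not part of the formula itself:

    import numpy as np

    def safe_bce(y, p, eps=1e-7):
        # Clip p into [eps, 1 - eps] so log never sees exactly 0 or 1
        p = np.clip(p, eps, 1 - eps)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    print(safe_bce(np.array([1.0]), np.array([1.0])))  # finite (~1e-7) instead of -log(0)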

Avoid these traps

Common Mistakes

  • Using log base 10 (use natural log).
  • p=0 or p=1 exactly (causes infinity).


References

Sources

  1. Wikipedia: Cross-entropy
  2. Goodfellow, Bengio, and Courville, Deep Learning (MIT Press, 2016)
  3. Bishop, Pattern Recognition and Machine Learning (Springer, 2006)
  4. Standard curriculum — Machine Learning