Big-O Complexity
Upper bound of algorithm runtime.
This public page keeps the free explanation visible and leaves premium worked solving, advanced walkthroughs, and saved study tools inside the app.
Core idea
Overview
Big-O complexity provides an asymptotic upper bound on the growth rate of an algorithm's execution time or space requirements relative to the input size n. The model approximates the actual runtime as T(n) ≈ C × f(n), where f(n) is the complexity-class function, often modeled as n^k, and C is a system-specific constant.
When to use: Apply this equation when benchmarking software performance to estimate how scaling data volume will impact execution time. It is particularly useful when you need to solve for the constant overhead C to compare different hardware environments running the same algorithm.
Why it matters: Understanding these growth relationships allows engineers to prevent system crashes by predicting when an input size will exceed the available time budget. It is the fundamental language used to evaluate the efficiency of data structures and algorithms in computer science.
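A minimal sketch of the runtime model described above; the function name and the constant C = 0.5 are illustrative assumptions, not part of any real library:

```python
def estimated_runtime(n, k, c):
    """Estimate runtime T(n) = C * n**k for input size n, order k, constant C.

    Hypothetical helper for illustration only.
    """
    return c * n ** k

# Doubling the input for a quadratic (k = 2) algorithm roughly
# quadruples the estimated runtime, regardless of the constant C.
t_small = estimated_runtime(1000, 2, 0.5)
t_large = estimated_runtime(2000, 2, 0.5)
print(t_large / t_small)  # → 4.0
```

Because C cancels in the ratio, this scaling argument holds on any hardware.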
Symbols
Variables
- T(n): estimated operations (runtime)
- n: input size
- k: order of growth
- C: constant factor
Walkthrough
Derivation
Understanding Big-O Complexity
Big-O notation gives an asymptotic upper bound on how an algorithm’s time or space usage grows with input size n, focusing on growth rate rather than exact constants. The derivation rests on three assumptions:
- Input size n is large (asymptotic comparison).
- Constant factors and lower-order terms are ignored when classifying growth rates.
- A basic operation count (or step count) is a reasonable proxy for runtime.
State the formal definition:
f(n) = O(g(n)) if there exist constants C > 0 and n₀ such that f(n) ≤ C × g(n) for all n ≥ n₀. In words: for sufficiently large n, f(n) is bounded above by a constant multiple of g(n).
Drop constants and lower-order terms:
For example, take T(n) = n² + 3n + 5. As n grows, n² dominates 3n and 5, so the overall growth rate is quadratic: T(n) = O(n²).
Note: Big-O is an upper bound; tight bounds are sometimes written with Θ(·).
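The step of dropping constants and lower-order terms can be checked numerically; a sketch, assuming an example step count T(n) = n² + 3n + 5:

```python
def step_count(n):
    # Assumed example: a quadratic step count with lower-order terms.
    return n ** 2 + 3 * n + 5

# The ratio T(n) / n**2 approaches 1 as n grows, so the 3n and 5
# terms become negligible and T(n) is O(n**2).
for n in (10, 1_000, 100_000):
    print(n, step_count(n) / n ** 2)
```

At n = 10 the lower-order terms still contribute noticeably; by n = 100,000 they are invisible, which is exactly what the asymptotic classification predicts.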
Result
Source: AQA A-Level Computer Science — Fundamentals of Algorithms
Visual intuition
Graph
The graph plots the input size n on the x-axis against the estimated operations T(n) on the y-axis, forming a curve defined by f(n). The shape depends on the complexity class: typically an increasing curve through the origin (when f(0) = 0) whose steepness grows with the order k.
Graph type: power_law
Why it behaves this way
Intuition
Picture a graph where the observed runtime T(n) for increasing input n is eventually bounded above by a scaled version C × f(n) of the algorithm's characteristic growth function.
Free study cues
Insight
Canonical usage
This equation relates the execution time T(n), typically measured in units of time (e.g., seconds, milliseconds) or in operation counts, to a dimensionless input size n through a dimensionless complexity function f(n).
Common confusion
Students sometimes attempt to assign physical units to the input size n or the complexity function f(n), which are fundamentally dimensionless counts or ratios. The constant C absorbs the actual time unit, making T(n) dimensionally consistent.
Dimension note
The input size n and the complexity function f(n) are dimensionless quantities, representing counts or ratios. The constant C and the runtime T(n) share the dimension of time.
One free problem
Practice Problem
A sorting algorithm with a complexity of O(n²) has a constant factor C of 0.5 milliseconds. If the input size n is 100 elements, calculate the total estimated execution time T in milliseconds.
Solve for: T
Hint: Square the input size n and then multiply by the constant C.
The full worked solution stays in the interactive walkthrough.
Where it shows up
Real-World Context
When comparing O(n²) vs O(n log n) sorting algorithms, Big-O complexity is used to estimate the operation count from the input size n, the order k, and the constant factor C. The result matters because it shows how algorithm cost will scale before you rely on the output.
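A sketch of that comparison, assuming equal constant factors of 1 for both algorithms (a simplification; real constants differ):

```python
import math

def quadratic_ops(n):
    # Estimated operations for an O(n**2) sort, constant factor assumed 1.
    return n ** 2

def linearithmic_ops(n):
    # Estimated operations for an O(n log n) sort, constant factor assumed 1.
    return n * math.log2(n)

n = 1_000_000
ratio = quadratic_ops(n) / linearithmic_ops(n)
print(f"At n = {n}, the quadratic sort does ~{ratio:,.0f}x more work")
```

Even with generous constant factors, an O(n log n) sort overtakes an O(n²) sort quickly as n grows, which is why asymptotic class usually matters more than C at scale.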
Study smarter
Tips
- Focus on the highest power of n and ignore lower-order terms.
- The constant C accounts for hardware speed and implementation efficiency.
- Linear growth (k=1) scales directly, while quadratic growth (k=2) scales with the square of the input.
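The scaling behaviour in the last tip can be sketched directly from the model T(n) = C × n^k, since the constant C cancels when comparing T(2n) to T(n):

```python
# How doubling the input size multiplies the runtime for each order k.
# The constant factor C cancels in the ratio T(2n) / T(n) = 2**k.
for k in (1, 2, 3):
    ratio = (2 * 100) ** k / 100 ** k
    print(f"k = {k}: doubling n multiplies runtime by {ratio:.0f}x")
```

So linear (k=1) work doubles, quadratic (k=2) work quadruples, and cubic (k=3) work increases eightfold.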
Avoid these traps
Common Mistakes
- Mixing up the input size n with the order of growth k.
- Treating Big-O as an exact runtime rather than an upper bound on growth.
References
Sources
- Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. — Introduction to Algorithms, 3rd Edition
- Wikipedia: Big O notation
- AQA A-Level Computer Science — Fundamentals of Algorithms