Quick Reference (Time: Varies / Space: Varies)
// Analyze nested loops
let total = 0;
for (let i = 0; i < n; i++) {        // O(n)
  for (let j = i; j < n; j++) {      // O(n - i)
    total++;                         // O(1)
  }
}
// Total: sum(n-i) for i=0..n-1 = n(n+1)/2 = O(n^2)
// Drop constants and lower-order terms
// Drop constants and lower-order terms
Use when
  • Analyzing algorithm performance
  • Comparing approaches
  • Interview complexity questions
  • Nested loop analysis
Watch out
  • Confusing average and worst case
  • Forgetting hidden costs (e.g., string concatenation is O(n))
  • Not accounting for space used by recursion stack

Complexity Analysis

Master Big O notation, derive time and space complexity step by step, and learn the shortcuts that make complexity analysis second nature in interviews.

Roadmap

  1. What Is Big O?
  2. Common Complexities
  3. Deriving Time Complexity
  4. Deriving Space Complexity
  5. Amortized & Average Case
  6. Recurrence Relations
  7. Pattern Complexity Reference
  8. Shortcuts & Traps
  9. Walkthrough: Deriving Complexity
  10. Key Takeaways
Section 01

What Is Big O?

Big O notation describes how an algorithm's resource usage grows as the input size increases. It strips away constants and low-order terms to focus on the dominant factor. The question it answers: "As n gets very large, what happens to the running time (or memory)?"

There are three related notations: Big O (an upper bound), Big Omega, Ω (a lower bound), and Big Theta, Θ (a tight bound). In practice, interviews almost always mean Big O, the upper bound.

The Core Definition

We say f(n) = O(g(n)) if there exist constants c > 0 and n0 such that for all n ≥ n0:

f(n) ≤ c · g(n)

In plain English: past some threshold, f(n) never grows faster than a constant multiple of g(n). That is why we drop constants — they get absorbed into c. And we drop lower-order terms because the dominant term dwarfs everything else for large n.
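The definition can be checked numerically. The sketch below uses an illustrative function f(n) = 3n² + 5n + 100 and the witness constants c = 4 and n0 = 13 (one valid choice among many, not the only one):

```javascript
// Claim: f(n) = 3n^2 + 5n + 100 is O(n^2).
// Witness constants (one valid choice): c = 4, n0 = 13.
const f = (n) => 3 * n * n + 5 * n + 100;
const g = (n) => n * n;

function holdsFor(c, n0, upTo) {
  // Check f(n) <= c * g(n) for every n in [n0, upTo].
  for (let n = n0; n <= upTo; n++) {
    if (f(n) > c * g(n)) return false;
  }
  return true;
}

console.log(holdsFor(4, 13, 10000)); // true: the bound holds past the threshold
console.log(f(12) > 4 * g(12));      // true: the bound fails just below n0 = 13
```

The constants 3, 5, and 100 all disappear into c and n0 — which is exactly why Big O lets you drop them.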

💡
Interview rule: Big O always means worst-case unless you explicitly state otherwise. If an interviewer asks "what's the complexity?", they want the Big O worst-case bound.
Section 02

Common Complexities

Every algorithm falls into one of these growth classes. Memorize this hierarchy — you will reference it in every interview. Listed from fastest to slowest:

  • O(1): Constant
  • O(log n): Logarithmic
  • O(n): Linear
  • O(n log n): Linearithmic
  • O(n²): Quadratic
  • O(2ⁿ): Exponential
  • O(n!): Factorial

To put these in perspective, here is how many operations each complexity produces for n = 1,000:

Complexity    n = 1,000     Verdict
O(1)          1             Instant
O(log n)      ~10           Instant
O(n)          1,000         Fast
O(n log n)    ~10,000       Fast
O(n²)         1,000,000     Borderline
O(2ⁿ)         ~10³⁰⁰        Impossible
O(n!)         ~10²⁵⁶⁸       Impossible
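The polynomial and logarithmic rows can be reproduced directly (the exponential and factorial rows overflow floating-point long before n = 1,000, which is itself the point):

```javascript
// Operation counts for n = 1,000 under each tractable growth class.
const n = 1000;
const counts = {
  "O(1)": 1,
  "O(log n)": Math.log2(n),       // ~9.97, rounds to 10
  "O(n)": n,                      // 1,000
  "O(n log n)": n * Math.log2(n), // ~9,966, i.e. ~10,000
  "O(n^2)": n * n,                // 1,000,000
};

console.log(Math.round(counts["O(log n)"]));   // 10
console.log(Math.round(counts["O(n log n)"])); // 9966
```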
💡
Real-world analogy: O(1) is looking up a word in a dictionary by page number. O(log n) is binary searching the dictionary by flipping to the middle. O(n) is reading every page. O(n²) is comparing every page to every other page. O(2ⁿ) is listing every possible subset of pages.
Section 03

Deriving Time Complexity

Follow this systematic method to derive the time complexity of any algorithm. These four rules cover the vast majority of interview problems.

🔁

Rule 1: Count the Loops

Nested loops over the same collection multiply: two nested loops over n items = O(n²). Sequential loops add: O(n) + O(n) = O(n).
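Rule 1 can be verified by counting iterations directly. The helper names below are my own:

```javascript
// Nested loops multiply; sequential loops add.
function nestedCount(n) {
  let ops = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) ops++; // n * n iterations
  return ops;
}

function sequentialCount(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++;   // n iterations
  for (let j = 0; j < n; j++) ops++;   // + n more iterations
  return ops;
}

console.log(nestedCount(100));     // 10000 -> quadratic
console.log(sequentialCount(100)); // 200   -> still linear
```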

🌳

Rule 2: Identify Recursion

Recursive complexity = depth × work per level. A binary tree traversal does O(1) work at each of n nodes = O(n). A branching recursion that splits k ways at each of n levels has O(kⁿ) nodes.
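Naive Fibonacci is the classic branching recursion: each call spawns two more, so the call tree has O(2ⁿ) nodes. A quick instrumented sketch makes the blow-up concrete:

```javascript
// Count the calls made by naive (uncached) Fibonacci.
let calls = 0;
function fib(n) {
  calls++;                        // one node in the call tree
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2); // branches 2 ways per call
}

fib(20);
console.log(calls); // 21891 calls for n = 20 -- exponential growth
```

Memoizing the results collapses the same recursion to O(n), which is why caching is the first fix to reach for.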

🔍

Rule 3: Watch for Hidden O(n)

Operations that look O(1) but are actually O(n): string concatenation, array.includes(), array.slice(), array.splice(), and spread operators inside loops.

Rule 4: Add, Then Simplify

Sum all phases of the algorithm, then drop lower-order terms. O(n log n) + O(n) = O(n log n). The dominant term wins.

Let's apply these rules to a concrete example — the two-pointer approach for finding a pair that sums to a target in a sorted array:

Worked Example: Two-Pointer Pair Sum

function twoSum(nums, target) {
  let left = 0, right = nums.length - 1;  // O(1)

  while (left < right) {                   // Runs at most n times
    const sum = nums[left] + nums[right];  // O(1) per iteration
    if (sum === target) return [left, right];
    if (sum < target) left++;
    else right--;
  }
  return [];
}

Analysis: One while loop. Each iteration advances at least one pointer. The pointers start n apart and converge, so the loop runs at most n times. Each iteration does O(1) work. Total: O(n) time, O(1) space.

Section 04

Deriving Space Complexity

Space complexity measures how much extra memory an algorithm needs as the input size grows. The input itself usually does not count — only the auxiliary memory your algorithm allocates.

Ask yourself: "What new data structures or stack frames grow as n increases?"

  • Recursive DFS: O(h). Stack frames = tree height. Balanced tree: O(log n). Skewed tree: O(n).
  • Iterative BFS: O(w). Queue holds one level at a time. Width w is at most n/2 for a complete tree = O(n).
💡
Common mistake: Forgetting recursion stack space. A simple recursive DFS that uses no explicit data structures still uses O(h) space on the call stack. Always mention it.
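The hidden stack cost is easy to demonstrate: on a skewed (linked-list-shaped) tree, a recursive depth computation nests one call frame per node. The tree builder below is an illustrative helper, not a standard API:

```javascript
// Build a fully skewed tree of n nodes: every node hangs off the left child.
function makeSkewed(n) {
  let root = null;
  for (let i = 0; i < n; i++) root = { val: i, left: root, right: null };
  return root;
}

// Each nested call here occupies one stack frame, so peak stack
// usage equals the tree height: O(h).
function maxDepth(node) {
  if (!node) return 0;
  return 1 + Math.max(maxDepth(node.left), maxDepth(node.right));
}

console.log(maxDepth(makeSkewed(1000))); // 1000 -- stack depth grows linearly
```

On a skewed tree of n nodes the recursion is n frames deep, which is the O(n)-space worst case the tip warns about.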
Section 05

Amortized & Average Case

Amortized analysis looks at the average cost per operation over a sequence of operations, even if individual operations occasionally spike. Average case looks at the expected cost over all possible inputs.

Classic Example: Dynamic Array (ArrayList)

When you .push() onto a dynamic array and it runs out of capacity, the array doubles in size — copying all n elements. That single resize is O(n). But it only happens every n pushes.

Push 1  → capacity 1  (copy 0 elements)
Push 2  → capacity 2  (copy 1 element)
Push 3  → capacity 4  (copy 2 elements)
Push 5  → capacity 8  (copy 4 elements)
Push 9  → capacity 16 (copy 8 elements)
...
Total copies for n pushes: 1+2+4+8+...+n = 2n - 1

Total work for n pushes: n (for the pushes themselves) + 2n - 1 (for copies) = O(n) total. Divide by n operations: O(1) amortized per push.
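A simulation of the doubling strategy confirms the arithmetic; the function name is my own:

```javascript
// Simulate n pushes onto a doubling array and count element copies.
function simulatePushes(n) {
  let capacity = 1, size = 0, copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) { // full: double capacity, copy everything over
      copies += size;
      capacity *= 2;
    }
    size++;
  }
  return copies;
}

console.log(simulatePushes(1025)); // 2047 copies: about 2n total, so O(1) amortized
```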

Other amortized O(1) operations you should know: hash map insertion (the occasional rehash spreads out), appending to a string builder, and monotonic stack push/pop across a full pass.

💡
Interview tip: When citing amortized complexity, say it explicitly: "O(1) amortized per operation." If the interviewer asks about worst case, acknowledge the occasional O(n) spike but explain that the total across all operations is still O(n).
Section 06

Recurrence Relations

Recursive algorithms produce recurrence relations of the form T(n) = aT(n/b) + O(n^d), where a is the number of subproblems, n/b is each subproblem's size, and O(n^d) is the work done outside the recursive calls.

Master Theorem (Simplified)

For T(n) = aT(n/b) + O(n^d), compare d to log_b(a):

  • d < log_b(a) → O(n^(log_b(a))) — recursion dominates
  • d = log_b(a) → O(n^d · log n) — balanced
  • d > log_b(a) → O(n^d) — top-level work dominates

Here are the most common recurrences you will encounter in interviews:

T(n) = 2T(n/2) + O(n)
O(n log n)

Merge Sort. Two halves, linear merge. d = 1, log_2(2) = 1, so d = log_b(a).

T(n) = T(n/2) + O(1)
O(log n)

Binary Search. One half, constant work. d = 0, log_2(1) = 0, so d = log_b(a).

T(n) = T(n/2) + O(n)
O(n)

Quickselect (avg). One half, linear partition. d = 1 > log_2(1) = 0, so top-level dominates.

T(n) = 3T(n/2) + O(n)
O(n^1.58)

Karatsuba multiply. Three subproblems of half size. d = 1 < log_2(3) ≈ 1.58.
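The simplified Master Theorem mechanizes nicely. The helper below is an illustrative sketch (not a library function), and it prints bounds in a raw O(n^d log n) form — so "O(n^1 log n)" means O(n log n) and "O(n^0 log n)" means O(log n):

```javascript
// Classify T(n) = a*T(n/b) + O(n^d) by comparing d to log_b(a).
function masterTheorem(a, b, d) {
  const critical = Math.log(a) / Math.log(b); // log_b(a)
  const EPS = 1e-9;
  if (Math.abs(d - critical) < EPS) return `O(n^${d} log n)`; // balanced
  if (d < critical) return `O(n^${critical.toFixed(2)})`;     // recursion dominates
  return `O(n^${d})`;                                         // top level dominates
}

console.log(masterTheorem(2, 2, 1)); // Merge Sort:    O(n^1 log n)
console.log(masterTheorem(1, 2, 0)); // Binary Search: O(n^0 log n)
console.log(masterTheorem(1, 2, 1)); // Quickselect:   O(n^1)
console.log(masterTheorem(3, 2, 1)); // Karatsuba:     O(n^1.58)
```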

Section 07

Pattern Complexity Reference

A complete reference table for all 27 algorithm patterns. The Brute Force column shows the naive approach; Optimal shows what the pattern achieves.

Pattern                  Brute Force    Optimal Time      Optimal Space   Key Insight
Two Pointers             O(n²)          O(n)              O(1)            Sorted + pair → two ptrs
Sliding Window           O(n·k)         O(n)              O(k)            Expand/shrink window
Binary Search            O(n)           O(log n)          O(1)            Halves search space
Dynamic Programming      O(2ⁿ)          O(n·m)            O(n·m)          Cache subproblems
Backtracking             O(nⁿ)          O(n!)             O(n)            Prune early
Tree Traversal           -              O(n)              O(h)            Visit each node once
Monotonic Stack          O(n²)          O(n)              O(n)            Push/pop once each
In-Place Reversal        O(n)+O(n)      O(n)              O(1)            Re-wire pointers
Fast & Slow Ptrs         O(n)+O(n)      O(n)              O(1)            Cycle detection
Prefix Sum               O(n·q)         O(n+q)            O(n)            Precompute sums
Matrix Traversal         -              O(m·n)            O(m·n)          BFS/DFS on grid
DFS & BFS                -              O(V+E)            O(V)            Visit each vertex once
Overlapping Intervals    O(n²)          O(n log n)        O(n)            Sort by start
Top K Elements           O(n log n)     O(n log k)        O(k)            Min-heap of size k
Trie                     O(N·L)         O(L) per op       O(N·L)          One node per char
Union-Find               O(n²)          O(α(n)) per op    O(n)            Path compression
Topological Sort         O(V·E)         O(V+E)            O(V+E)          Kahn's BFS
Greedy                   O(2ⁿ)          O(n log n)        O(1)            Sort + single pass
Bit Manipulation         O(n log n)     O(n)              O(1)            XOR cancels pairs
Monotonic Queue          O(n·k)         O(n)              O(k)            Deque max/min
Two Heaps                O(n²)          O(n log n)        O(n)            Max-heap + min-heap
Cyclic Sort              O(n log n)     O(n)              O(1)            Swap to correct index
Subsets & Combos         O(n·2ⁿ)        O(2ⁿ)             O(2ⁿ)           Inherent output size
Linked List Techniques   O(n)+O(n)      O(n)              O(1)            Dummy head + pointers
Design Patterns          O(n) per op    O(1) per op       O(n)            Combine structures
Segment Tree             O(n·q)         O(n + q log n)    O(n)            Range query tree
String Matching          O(n·m)         O(n+m)            O(m)            Failure function
Section 08

Shortcuts & Traps

These shortcuts let you derive complexity in seconds during an interview. Memorize the rules, then watch out for the traps that catch even experienced candidates.

Shortcuts

Nested loops → multiply

Two nested loops over the same array of size n = O(n²). Three nested = O(n³). Always multiply depths.

Sorting dominates

If you sort first, the total is at least O(n log n). A linear scan afterward does not change the overall complexity.

Hash map trades space for time

Adding O(n) space via a hash map converts an O(n²) nested-loop search into O(n) with O(1) lookups.

DFS/BFS visits each node once

Graph traversal touches each vertex and edge once: O(V + E). For a tree with n nodes, E = n - 1, so O(n).

Binary search halves input

Anything that halves the search space per step runs in O(log n). This includes binary search on arrays, BST lookup, and divide-and-conquer with one branch.

Push/pop once each → O(n)

Monotonic stacks and queues: each element enters and leaves at most once across the entire loop. Total operations = O(2n) = O(n).
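Next Greater Element is the canonical demonstration: the inner while loop looks like it could make the algorithm O(n²), but each index is pushed and popped at most once, so the total work is O(n):

```javascript
// For each element, find the next element to its right that is greater.
function nextGreater(nums) {
  const result = new Array(nums.length).fill(-1);
  const stack = []; // indices whose "next greater" is not yet found
  for (let i = 0; i < nums.length; i++) {
    // Pop every index whose value is beaten by nums[i].
    while (stack.length && nums[stack[stack.length - 1]] < nums[i]) {
      result[stack.pop()] = nums[i];
    }
    stack.push(i); // each index is pushed exactly once
  }
  return result;
}

console.log(nextGreater([2, 1, 3])); // [3, 3, -1]
```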

Common Traps

⚠️ String concat in a loop

In many languages, str += char creates a new string each time. Inside a loop of n iterations, each concat copies up to n characters: O(n²) total. Use an array and join instead.
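A sketch of the trap and the fix side by side (note that some JavaScript engines mitigate repeated concatenation internally, but the pattern generalizes to languages with immutable strings):

```javascript
// Both build a string of n copies of "a", but with very different costs.
function buildSlow(n) {
  let s = "";
  for (let i = 0; i < n; i++) s += "a"; // each += may copy the whole string: O(n^2) total
  return s;
}

function buildFast(n) {
  const parts = [];
  for (let i = 0; i < n; i++) parts.push("a"); // O(1) amortized per push
  return parts.join("");                       // one O(n) pass at the end
}

console.log(buildSlow(5) === buildFast(5)); // true -- same result, different cost
```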

⚠️ Sorting inside a loop

Calling .sort() inside a loop of n iterations means O(n · n log n) = O(n² log n). If you need sorted order repeatedly, sort once outside the loop.

⚠️ Forgetting recursion stack

A recursive function with no explicit data structures still uses the call stack. DFS on a skewed tree is O(n) space, not O(1). Always account for stack depth.

⚠️ Confusing O(n+m) with O(n·m)

Sequential passes over two collections of size n and m: O(n + m). Nested passes: O(n · m). The difference is enormous. Be precise about which you mean.

Section 09

Walkthrough: Deriving Complexity

Let's walk through a complete complexity derivation for a classic problem: finding duplicates in an array. We will analyze three approaches with increasing sophistication, showing the time-space tradeoff in action.

Approach 1: Brute Force (Nested Loops)

function hasDuplicate(nums) {
  for (let i = 0; i < nums.length; i++) {       // n iterations
    for (let j = i + 1; j < nums.length; j++) { // up to n-1 iterations
      if (nums[i] === nums[j]) return true;     // O(1) comparison
    }
  }
  return false;
}
Time
O(n²)
Nested loops: n × (n-1) / 2 comparisons
Space
O(1)
No extra memory allocated

Approach 2: Sort First

function hasDuplicate(nums) {
  nums.sort((a, b) => a - b);                   // O(n log n)
  for (let i = 1; i < nums.length; i++) {       // O(n) scan
    if (nums[i] === nums[i - 1]) return true;   // O(1) comparison
  }
  return false;
}
Time
O(n log n)
Sort O(n log n) + scan O(n) = O(n log n)
Space
O(1)
In-place sort (mutates input)

Approach 3: Hash Set

function hasDuplicate(nums) {
  const seen = new Set();                        // O(n) space worst case
  for (const num of nums) {                      // O(n) iterations
    if (seen.has(num)) return true;              // O(1) amortized lookup
    seen.add(num);                               // O(1) amortized insert
  }
  return false;
}
Time
O(n)
Single pass with O(1) lookups
Space
O(n)
Set grows up to n elements
💡
The time-space tradeoff: Brute force uses no extra memory but is slow. The hash set solution is fast but uses extra memory. Sorting sits in between — better time than brute force, no extra space, but it mutates the input. In interviews, always discuss these tradeoffs. The interviewer wants to see that you understand why one approach is better, not just that it is.
Section 10

Key Takeaways

Keep these five rules at the front of your mind during every interview. They will make complexity analysis feel automatic.

The Five Rules of Complexity Analysis

  • Big O describes growth rate, not exact count.

    O(n) means "linear growth." Whether it is 2n or 100n operations does not matter — the shape of the curve is what counts.

  • Drop constants and lower-order terms.

    O(3n² + 5n + 100) simplifies to O(n²). The quadratic term dominates everything else as n grows large.

  • Nested loops multiply, sequential loops add.

    A loop inside a loop = O(n × m). A loop followed by another loop = O(n + m). The structure of your code maps directly to the math.

  • Space = auxiliary memory that grows with input.

    Count hash maps, sets, arrays, and recursion stack frames. Ignore the input itself. If nothing grows with n, it is O(1) space.

  • When in doubt, count operations for small n.

    Plug in n = 1, 2, 4, 8 and count the iterations. If you see 1, 4, 16, 64 — that is n². If you see 0, 1, 2, 3 — that is log n. The pattern reveals the complexity.
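The fifth rule takes one minute to apply in code. Instrument the loop, map over small inputs, and read off the growth pattern:

```javascript
// Count iterations of a doubly nested loop for small n.
function countOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) ops++;
  return ops;
}

console.log([1, 2, 4, 8].map(countOps)); // [1, 4, 16, 64] -- the n^2 signature
```

Quadrupling when n doubles is the quadratic fingerprint; doubling would mean linear, and a constant +1 per doubling would mean logarithmic.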