```c
// Analyze nested loops
int total = 0;
for (int i = 0; i < n; i++) {      // O(n)
    for (int j = i; j < n; j++) {  // O(n - i)
        total++;                   // O(1)
    }
}
// Total: sum(n-i) for i=0..n-1 = O(n^2)
// Drop constants and lower-order terms
```

```go
// Analyze nested loops
total := 0
for i := 0; i < n; i++ {      // O(n)
    for j := i; j < n; j++ {  // O(n - i)
        total++               // O(1)
    }
}
// Total: sum(n-i) for i=0..n-1 = O(n^2)
// Drop constants and lower-order terms
```

```python
# Analyze nested loops
total = 0
for i in range(n):         # O(n)
    for j in range(i, n):  # O(n - i)
        total += 1         # O(1)
# Total: sum(n-i) for i=0..n-1 = O(n^2)
# Drop constants and lower-order terms
```

```rust
// Analyze nested loops
let mut total = 0;
for i in 0..n {       // O(n)
    for _j in i..n {  // O(n - i)
        total += 1;   // O(1)
    }
}
// Total: sum(n-i) for i=0..n-1 = O(n^2)
// Drop constants and lower-order terms
```

```javascript
// Analyze nested loops
let total = 0;
for (let i = 0; i < n; i++) {      // O(n)
    for (let j = i; j < n; j++) {  // O(n - i)
        total++;                   // O(1)
    }
}
// Total: sum(n-i) for i=0..n-1 = O(n^2)
// Drop constants and lower-order terms
```

Master Big O notation, derive time and space complexity step by step, and learn the shortcuts that make complexity analysis second nature in interviews.
Big O notation describes how an algorithm's resource usage grows as the input size increases. It strips away constants and low-order terms to focus on the dominant factor. The question it answers: "As n gets very large, what happens to the running time (or memory)?"
There are three related notations: Big O (an upper bound), Big Omega (a lower bound), and Big Theta (a tight bound). In practice, interviews almost always mean Big O:
We say f(n) = O(g(n)) if there exist constants c > 0 and n0 such that for all n ≥ n0:
f(n) ≤ c · g(n)
In plain English: past some threshold, f(n) never grows faster than a constant multiple of g(n). That is why we drop constants — they get absorbed into c. And we drop lower-order terms because the dominant term dwarfs everything else for large n.
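You can check a witness pair numerically. A minimal sketch (the constants c = 4 and n₀ = 13 are one valid choice among many, and `witnessHolds` is an illustrative name):

```javascript
// Witness check: f(n) = 3n² + 5n + 100 is O(n²) with c = 4, n0 = 13.
const f = (n) => 3 * n * n + 5 * n + 100;
const g = (n) => n * n;

function witnessHolds(c, n0, upTo) {
  for (let n = n0; n <= upTo; n++) {
    if (f(n) > c * g(n)) return false; // bound violated past the threshold
  }
  return true;
}

console.log(witnessHolds(4, 13, 10000)); // true — f(n) ≤ 4·n² for all n ≥ 13
console.log(f(12) <= 4 * g(12));         // false — just below n0 the bound can fail
```

The bound only needs to hold past n₀; failing at small n (here n = 12) is exactly what the definition permits.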
Every algorithm falls into one of these growth classes. Memorize this hierarchy — you will reference it in every interview. Listed from fastest to slowest: O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ), O(n!).
To put these in perspective, here is how many operations each complexity produces for n = 1,000:
| Complexity | n = 1,000 | Verdict |
|---|---|---|
| O(1) | 1 | Instant |
| O(log n) | ~10 | Instant |
| O(n) | 1,000 | Fast |
| O(n log n) | ~10,000 | Fast |
| O(n²) | 1,000,000 | Borderline |
| O(2ⁿ) | ~10³⁰¹ | Impossible |
| O(n!) | ~10²⁵⁶⁸ | Impossible |
Follow this systematic method to derive the time complexity of any algorithm. These four rules cover the vast majority of interview problems.
Nested loops over the same collection multiply: two nested loops over n items = O(n²). Sequential loops add: O(n) + O(n) = O(n).
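To see the multiply-vs-add rule directly, count iterations (the function names here are illustrative):

```javascript
// Nested loops multiply: n * n iterations.
function countNested(n) {
  let ops = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) ops++; // inner body runs n times per outer pass
  return ops;
}

// Sequential loops add: n + n iterations.
function countSequential(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++; // first pass: n
  for (let j = 0; j < n; j++) ops++; // second pass: + n
  return ops;
}

console.log(countNested(100));     // 10000 — O(n²)
console.log(countSequential(100)); // 200   — O(n) + O(n) = O(n)
```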
Recursive complexity = depth × work per level. A binary tree traversal does O(1) work at each of n nodes = O(n). A branching recursion that splits k ways at each of n levels has O(kⁿ) nodes.
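Naive Fibonacci is the classic 2-way branching recursion. A sketch that instruments the call count (the counters are mine, added for illustration):

```javascript
// Naive Fibonacci: branches 2 ways, depth n → O(2^n) calls.
let calls = 0;
function fib(n) {
  calls++;
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

// Memoized Fibonacci: each subproblem computed once → O(n) calls.
let memoCalls = 0;
const memo = new Map();
function fibMemo(n) {
  memoCalls++;
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n);
  const v = fibMemo(n - 1) + fibMemo(n - 2);
  memo.set(n, v);
  return v;
}

fib(20);
fibMemo(20);
console.log(calls);     // 21891 — exponential blowup already at n = 20
console.log(memoCalls); // 39 — linear in n once subproblems are cached
```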
Operations that look O(1) but are actually O(n): string concatenation, array.includes(), array.slice(), array.splice(), and spread operators inside loops.
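A hand-rolled `includes` makes the hidden cost countable. This sketch instruments the comparisons a linear search performs when called inside a loop (`linearIncludes` and the counter are illustrative, not a real engine internal):

```javascript
// `array.includes(x)` scans linearly. Inside a loop, that linear scan
// turns an O(n) loop into O(n²) total work.
let comparisons = 0;
function linearIncludes(arr, x) {
  for (const v of arr) {
    comparisons++; // count the hidden work
    if (v === x) return true;
  }
  return false;
}

const n = 200;
const nums = Array.from({ length: n }, (_, i) => i);

// "Is every element present?" via repeated includes:
for (const x of nums) linearIncludes(nums, x);

console.log(comparisons); // 20100 = n(n+1)/2 — quadratic, not linear
```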
Sum all phases of the algorithm, then drop lower-order terms. O(n log n) + O(n) = O(n log n). The dominant term wins.
Let's apply these rules to a concrete example — the two-pointer approach for finding a pair that sums to a target in a sorted array:
```javascript
function twoSum(nums, target) {
  let left = 0, right = nums.length - 1; // O(1)
  while (left < right) {                 // runs at most n times
    const sum = nums[left] + nums[right]; // O(1) per iteration
    if (sum === target) return [left, right];
    if (sum < target) left++;
    else right--;
  }
  return [];
}
```
Analysis: One while loop. Each iteration advances at least one pointer. The pointers start n apart and converge, so the loop runs at most n times. Each iteration does O(1) work. Total: O(n) time, O(1) space.
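You can confirm the "at most n iterations" claim empirically. A sketch with an iteration counter bolted on (`twoSumCounted` is my instrumented variant, not part of the original solution):

```javascript
// Two-pointer search with an iteration counter to verify the O(n) bound.
function twoSumCounted(nums, target) {
  let left = 0, right = nums.length - 1, iterations = 0;
  while (left < right) {
    iterations++;
    const sum = nums[left] + nums[right];
    if (sum === target) return { pair: [left, right], iterations };
    if (sum < target) left++;
    else right--;
  }
  return { pair: [], iterations };
}

const sorted = Array.from({ length: 1000 }, (_, i) => i); // 0..999
const miss = twoSumCounted(sorted, -1); // no pair can sum to -1
console.log(miss.iterations);           // 999 — worst case is n - 1, never more

const found = twoSumCounted([1, 3, 5, 7], 8); // 1 + 7 = 8
console.log(found.pair);                       // [0, 3]
```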
Space complexity measures how much extra memory grows with input size. The input itself usually does not count — only the auxiliary memory your algorithm allocates.
Ask yourself: "What new data structures or stack frames grow as n increases?"
arr.slice() creates a new array of up to O(n) size.

Amortized analysis looks at the average cost per operation over a sequence of operations, even if individual operations occasionally spike. Average case looks at the expected cost over all possible inputs.
When you .push() onto a dynamic array and it runs out of capacity, the array doubles in size — copying all n elements. That single resize is O(n). But it only happens every n pushes.
Push 1 → capacity 1 (copy 0 elements)
Push 2 → capacity 2 (copy 1 element)
Push 3 → capacity 4 (copy 2 elements)
Push 5 → capacity 8 (copy 4 elements)
Push 9 → capacity 16 (copy 8 elements)
...
Total copies for n pushes: 1 + 2 + 4 + 8 + ... + n = 2n - 1
Total work for n pushes: n (for the pushes themselves) + 2n - 1 (for copies) = O(n) total. Divide by n operations: O(1) amortized per push.
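A toy simulation makes the accounting concrete (`simulatePushes` models the doubling policy above, not any real engine's growth strategy):

```javascript
// Simulate a doubling dynamic array and count element copies.
function simulatePushes(n) {
  let capacity = 1, size = 0, copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) { // full: double and copy everything over
      copies += size;
      capacity *= 2;
    }
    size++;
  }
  return copies;
}

console.log(simulatePushes(1024)); // 1023 copies for 1024 pushes — under 2n total
console.log(simulatePushes(1025)); // 2047 — one push later, a single O(n) spike
```

The spike at push 1025 is the occasional expensive resize; spread over all pushes, the cost stays O(1) amortized.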
Other operations follow the same amortized O(1) pattern — hash-map inserts are the classic example, since occasional rehashing spreads its cost over many cheap inserts, just like array doubling.
Recursive algorithms produce recurrence relations of the form T(n) = aT(n/b) + O(n^d), where a is the number of subproblems, n/b is each subproblem's size, and O(n^d) is the work done outside the recursive calls.
For T(n) = aT(n/b) + O(n^d), compare d to log_b(a):
d < log_b(a) → O(n^(log_b(a))) — recursion dominates
d = log_b(a) → O(n^d · log n) — balanced
d > log_b(a) → O(n^d) — top-level work dominates
Here are the most common recurrences you will encounter in interviews:
Merge Sort. Two halves, linear merge. d = 1, log_2(2) = 1, so d = log_b(a).
Binary Search. One half, constant work. d = 0, log_2(1) = 0, so d = log_b(a).
Quickselect (avg). One half, linear partition. d = 1 > log_2(1) = 0, so top-level dominates.
Karatsuba multiply. Three subproblems of half size. d = 1 < log_2(3) ≈ 1.58.
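You can sanity-check the balanced case numerically by unrolling the merge sort recurrence (assuming T(1) = 1 and n a power of two — both choices are mine for the sketch):

```javascript
// Unroll T(n) = 2·T(n/2) + n with T(1) = 1 (n a power of two).
// Master theorem: a = 2, b = 2, d = 1, and d = log_2(2) → O(n log n).
function T(n) {
  if (n === 1) return 1;
  return 2 * T(n / 2) + n;
}

// For this recurrence the closed form is n·log2(n) + n.
console.log(T(1024));                       // 11264
console.log(1024 * Math.log2(1024) + 1024); // 11264 — matches the closed form
```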
A complete reference table for all 27 algorithm patterns. The Brute Force column shows the naive approach; Optimal shows what the pattern achieves.
| Pattern | Brute Force | Optimal Time | Optimal Space | Key Insight |
|---|---|---|---|---|
| Two Pointers | O(n²) | O(n) | O(1) | Sorted + pair → two ptrs |
| Sliding Window | O(n·k) | O(n) | O(k) | Expand/shrink window |
| Binary Search | O(n) | O(log n) | O(1) | Halves search space |
| Dynamic Programming | O(2ⁿ) | O(n·m) | O(n·m) | Cache subproblems |
| Backtracking | O(nⁿ) | O(n!) | O(n) | Prune early |
| Tree Traversal | — | O(n) | O(h) | Visit each node once |
| Monotonic Stack | O(n²) | O(n) | O(n) | Push/pop once each |
| In-Place Reversal | O(n)+O(n) | O(n) | O(1) | Re-wire pointers |
| Fast & Slow Ptrs | O(n)+O(n) | O(n) | O(1) | Cycle detection |
| Prefix Sum | O(n·q) | O(n+q) | O(n) | Precompute sums |
| Matrix Traversal | — | O(m·n) | O(m·n) | BFS/DFS on grid |
| DFS & BFS | — | O(V+E) | O(V) | Visit each vertex once |
| Overlapping Intervals | O(n²) | O(n log n) | O(n) | Sort by start |
| Top K Elements | O(n log n) | O(n log k) | O(k) | Min-heap of size k |
| Trie | O(N·L) | O(L) per op | O(N·L) | One node per char |
| Union-Find | O(n²) | O(α(n)) per op | O(n) | Path compression |
| Topological Sort | O(V·E) | O(V+E) | O(V+E) | Kahn's BFS |
| Greedy | O(2ⁿ) | O(n log n) | O(1) | Sort + single pass |
| Bit Manipulation | O(n log n) | O(n) | O(1) | XOR cancels pairs |
| Monotonic Queue | O(n·k) | O(n) | O(k) | Deque max/min |
| Two Heaps | O(n²) | O(n log n) | O(n) | Max-heap + min-heap |
| Cyclic Sort | O(n log n) | O(n) | O(1) | Swap to correct index |
| Subsets & Combos | O(n·2ⁿ) | O(2ⁿ) | O(2ⁿ) | Inherent output size |
| Linked List Techniques | O(n)+O(n) | O(n) | O(1) | Dummy head + pointers |
| Design Patterns | O(n) per op | O(1) per op | O(n) | Combine structures |
| Segment Tree | O(n·q) | O(n + q log n) | O(n) | Range query tree |
| String Matching | O(n·m) | O(n+m) | O(m) | Failure function |
These shortcuts let you derive complexity in seconds during an interview. Memorize the rules, then watch out for the traps that catch even experienced candidates.
Two nested loops over the same array of size n = O(n²). Three nested = O(n³). Always multiply depths.
If you sort first, the total is at least O(n log n). A linear scan afterward does not change the overall complexity.
Adding O(n) space via a hash map converts an O(n²) nested-loop search into O(n) with O(1) lookups.
Graph traversal touches each vertex and edge once: O(V + E). For a tree with n nodes, E = n - 1, so O(n).
Anything that halves the search space per step runs in O(log n). This includes binary search on arrays, BST lookup, and divide-and-conquer with one branch.
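An instrumented binary search shows the halving in action (the step counter is added for illustration):

```javascript
// Binary search with a step counter: each iteration halves the range.
function binarySearchSteps(arr, target) {
  let lo = 0, hi = arr.length - 1, steps = 0;
  while (lo <= hi) {
    steps++;
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return { index: mid, steps };
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return { index: -1, steps };
}

const a = Array.from({ length: 1 << 20 }, (_, i) => i); // 1,048,576 elements
const { steps } = binarySearchSteps(a, -1);             // worst case: not found
console.log(steps); // 20 — log2(1,048,576) halvings, versus a million for a scan
```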
Monotonic stacks and queues: each element enters and leaves at most once across the entire loop. Total operations = O(2n) = O(n).
In many languages, str += char creates a new string each time. Inside a loop of n iterations, each concat copies up to n characters: O(n²) total. Use an array and join instead.
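A sketch of both approaches side by side (engine-dependent: some JS runtimes optimize repeated concatenation internally, but the array-and-join pattern is the safe habit across languages):

```javascript
// Quadratic in naive string models: each += copies the string built so far.
function buildQuadratic(n) {
  let s = "";
  for (let i = 0; i < n; i++) s += "x"; // copies up to i chars per iteration
  return s;
}

// Linear: collect pieces with amortized O(1) pushes, join once at the end.
function buildLinear(n) {
  const parts = [];
  for (let i = 0; i < n; i++) parts.push("x");
  return parts.join(""); // single O(n) pass
}

console.log(buildQuadratic(5) === buildLinear(5)); // true — same result, different cost
```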
Calling .sort() inside a loop of n iterations means O(n · n log n) = O(n² log n). If you need sorted order repeatedly, sort once outside the loop.
A recursive function with no explicit data structures still uses the call stack. DFS on a skewed tree is O(n) space, not O(1). Always account for stack depth.
Sequential passes over two collections of size n and m: O(n + m). Nested passes: O(n · m). The difference is enormous. Be precise about which you mean.
Let's walk through a complete complexity derivation for a classic problem: finding duplicates in an array. We will analyze three approaches with increasing sophistication, showing the time-space tradeoff in action.
```javascript
// Approach 1: brute force — compare every pair.
// Time: O(n²), Space: O(1)
function hasDuplicate(nums) {
  for (let i = 0; i < nums.length; i++) {       // n iterations
    for (let j = i + 1; j < nums.length; j++) { // up to n-1 iterations
      if (nums[i] === nums[j]) return true;     // O(1) comparison
    }
  }
  return false;
}
```

```javascript
// Approach 2: sort, then compare adjacent elements.
// Time: O(n log n), Space: O(1) extra (sorting mutates the input)
function hasDuplicate(nums) {
  nums.sort((a, b) => a - b);                 // O(n log n)
  for (let i = 1; i < nums.length; i++) {     // O(n) scan
    if (nums[i] === nums[i - 1]) return true; // O(1) comparison
  }
  return false;
}
```

```javascript
// Approach 3: trade O(n) space for O(n) time with a hash set.
// Time: O(n), Space: O(n)
function hasDuplicate(nums) {
  const seen = new Set();           // O(n) space worst case
  for (const num of nums) {         // O(n) iterations
    if (seen.has(num)) return true; // O(1) amortized lookup
    seen.add(num);                  // O(1) amortized insert
  }
  return false;
}
```
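All three approaches must agree on every input — only their cost differs. A quick cross-check harness (the suffixed names are mine, to keep the three versions side by side):

```javascript
// Three duplicate-detection strategies: same answers, different costs.
const hasDuplicateBrute = (nums) => {
  for (let i = 0; i < nums.length; i++)
    for (let j = i + 1; j < nums.length; j++)
      if (nums[i] === nums[j]) return true;
  return false;
};

const hasDuplicateSort = (nums) => {
  const sorted = [...nums].sort((a, b) => a - b); // copy to avoid mutating input
  for (let i = 1; i < sorted.length; i++)
    if (sorted[i] === sorted[i - 1]) return true;
  return false;
};

// A Set deduplicates, so a size mismatch means a duplicate existed.
const hasDuplicateSet = (nums) => new Set(nums).size !== nums.length;

const cases = [[], [1], [1, 2, 3], [1, 2, 1], [5, 5], [3, 1, 4, 1, 5]];
for (const c of cases) {
  const r = [hasDuplicateBrute(c), hasDuplicateSort(c), hasDuplicateSet(c)];
  console.log(r.every((x) => x === r[0])); // true — all three agree
}
```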
Keep these five rules at the front of your mind during every interview. They will make complexity analysis feel automatic.
O(n) means "linear growth." Whether it is 2n or 100n operations does not matter — the shape of the curve is what counts.
O(3n² + 5n + 100) simplifies to O(n²). The quadratic term dominates everything else as n grows large.
A loop inside a loop = O(n × m). A loop followed by another loop = O(n + m). The structure of your code maps directly to the math.
Count hash maps, sets, arrays, and recursion stack frames. Ignore the input itself. If nothing grows with n, it is O(1) space.
Plug in n = 1, 2, 4, 8 and count the iterations. If you see 1, 4, 16, 64 — that is n². If you see 0, 1, 2, 3 — that is log n. The pattern reveals the complexity.
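The doubling trick above can be sketched directly (`mysteryOps` stands in for whatever code you are analyzing):

```javascript
// Empirically detect growth: plug in n = 1, 2, 4, 8 and count iterations.
function mysteryOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) ops++;
  return ops;
}

const counts = [1, 2, 4, 8].map(mysteryOps);
console.log(counts); // [1, 4, 16, 64] — quadrupling when n doubles → O(n²)
```

Each doubling of n quadruples the count: the signature of quadratic growth. Linear code would double, and logarithmic code would add a constant.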