```java
import java.util.LinkedHashMap;

class LRUCache {
    int cap;
    LinkedHashMap<Integer, Integer> cache = new LinkedHashMap<>();

    LRUCache(int capacity) { cap = capacity; }

    int get(int key) {
        if (!cache.containsKey(key)) return -1;
        int val = cache.remove(key);
        cache.put(key, val); // move to end
        return val;
    }

    void put(int key, int value) {
        cache.remove(key);
        cache.put(key, value);
        if (cache.size() > cap)
            cache.remove(cache.keySet().iterator().next());
    }
}
```

```go
import "container/list"

type LRUCache struct {
    cap   int
    cache map[int]*list.Element
    order *list.List
}

func (c *LRUCache) Get(key int) int {
    if el, ok := c.cache[key]; ok {
        c.order.MoveToBack(el)
        return el.Value.([2]int)[1]
    }
    return -1
}

func (c *LRUCache) Put(key, value int) {
    if el, ok := c.cache[key]; ok {
        c.order.MoveToBack(el)
        el.Value = [2]int{key, value}
    } else {
        c.cache[key] = c.order.PushBack([2]int{key, value})
        if c.order.Len() > c.cap {
            oldest := c.order.Front()
            delete(c.cache, oldest.Value.([2]int)[0])
            c.order.Remove(oldest)
        }
    }
}
```

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)
```

```rust
use std::collections::HashMap;

struct LRUCache {
    cap: usize,
    map: HashMap<i32, i32>,
    order: Vec<i32>, // use a linked list for O(1); Vec makes these operations O(n)
}

impl LRUCache {
    fn get(&mut self, key: i32) -> i32 {
        if let Some(&val) = self.map.get(&key) {
            self.order.retain(|&k| k != key);
            self.order.push(key);
            val
        } else {
            -1
        }
    }

    fn put(&mut self, key: i32, value: i32) {
        self.order.retain(|&k| k != key);
        self.order.push(key);
        self.map.insert(key, value);
        if self.map.len() > self.cap {
            let old = self.order.remove(0);
            self.map.remove(&old);
        }
    }
}
```

```typescript
class LRUCache {
    cap: number;
    cache = new Map<number, number>();

    constructor(capacity: number) { this.cap = capacity; }

    get(key: number): number {
        if (!this.cache.has(key)) return -1;
        const val = this.cache.get(key)!;
        this.cache.delete(key);
        this.cache.set(key, val); // move to end
        return val;
    }

    put(key: number, value: number): void {
        this.cache.delete(key);
        this.cache.set(key, value);
        if (this.cache.size > this.cap) {
            this.cache.delete(this.cache.keys().next().value!);
        }
    }
}
```

Build custom data structures that combine hash maps, linked lists, and stacks to achieve O(1) operations for complex behaviors.
Design pattern problems ask you to build a custom data structure from scratch that supports a specific set of operations, all in optimal time. The trick is to compose two or more primitive structures (hash map, linked list, stack, array) so their strengths cover each other's weaknesses.
No single data structure gives you O(1) lookup, O(1) insertion, O(1) deletion, and ordering. But if you combine two structures—one for fast lookup (hash map) and one for ordering (doubly-linked list, stack)—you get both.
Why composites? A hash map gives O(1) get/put by key but has no ordering. A linked list gives O(1) insert/remove at known positions but has no random access. Combining them gives O(1) for every operation: look up the node in the map, then move it in the list.
Look for these recognition signals in the problem statement. If you spot one, you are likely facing a data structure design problem.
- "Design a data structure" or "Implement class"
- "O(1) time for all operations"
- "Least recently used" or "eviction policy"
- "Get minimum/maximum in O(1)"
The key clue: The problem requires multiple operations (lookup + ordering, push + getMin, insert + getRandom) each in O(1). Ask yourself: "Which two structures, when combined, cover every operation?" That composite is the answer.
The LRU Cache supports get(key) and put(key, value) in O(1). A hash map stores key → node pointers. A doubly-linked list orders nodes from most recently used (head) to least recently used (tail). On every access, we move the node to the head. On eviction, we remove the tail.
Operations: put(1,A), put(2,B), get(1), put(3,C)
Sentinel nodes matter. Using dummy HEAD and TAIL sentinel nodes eliminates all null-checking edge cases. Every real node is always between two other nodes, so remove and insert operations never need special-case logic for empty lists or boundary nodes.
The Min Stack supports push, pop, top, and getMin all in O(1). The key insight: maintain a second stack (or store pairs) that tracks the running minimum at each level. When we push, we also push the new min. When we pop, we also pop the min.
Two stacks side by side: the main stack and the min stack.
Choosing the variant: If the problem needs fast lookup + ordering (LRU, LFU, time-based caches), use hash map + linked list. If the problem needs a standard container with an extra aggregate query (getMin, getMax, getRandom), use augmentation—bolting a parallel structure alongside the main one.
Here are the annotated templates for both core variants, in all five languages. Each method is O(1), with one caveat: the Rust LRU version leans on the standard LinkedList, which offers no O(1) arbitrary removal, so its reorder-on-access step is O(n).
```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    private static class Node {
        int key, val;
        Node prev, next;
        Node(int key, int val) { this.key = key; this.val = val; }
    }

    private Map<Integer, Node> map = new HashMap<>();
    private Node head, tail; // sentinel nodes
    private int capacity;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head = new Node(0, 0);
        tail = new Node(0, 0);
        head.next = tail; // head ↔ tail
        tail.prev = head;
    }

    public int get(int key) {
        if (!map.containsKey(key)) return -1;
        Node node = map.get(key);
        remove(node);     // detach from current position
        insertHead(node); // move to front (most recent)
        return node.val;
    }

    public void put(int key, int val) {
        if (map.containsKey(key)) remove(map.get(key));
        Node node = new Node(key, val);
        map.put(key, node);
        insertHead(node);
        if (map.size() > capacity) { // over capacity: evict LRU
            Node lru = tail.prev;    // node just before tail sentinel
            remove(lru);
            map.remove(lru.key);
        }
    }

    private void remove(Node n) { // O(1) unlink
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void insertHead(Node n) { // O(1) insert after head sentinel
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
```
```go
type Node struct {
    key, val   int
    prev, next *Node
}

type LRUCache struct {
    capacity int
    m        map[int]*Node
    head     *Node // sentinel nodes
    tail     *Node
}

func Constructor(capacity int) LRUCache {
    head := &Node{}
    tail := &Node{}
    head.next = tail // head ↔ tail
    tail.prev = head
    return LRUCache{capacity, make(map[int]*Node), head, tail}
}

func (c *LRUCache) Get(key int) int {
    node, ok := c.m[key]
    if !ok {
        return -1
    }
    c.remove(node)     // detach from current position
    c.insertHead(node) // move to front (most recent)
    return node.val
}

func (c *LRUCache) Put(key, val int) {
    if node, ok := c.m[key]; ok {
        c.remove(node)
    }
    node := &Node{key: key, val: val}
    c.m[key] = node
    c.insertHead(node)
    if len(c.m) > c.capacity { // over capacity: evict LRU
        lru := c.tail.prev // node just before tail sentinel
        c.remove(lru)
        delete(c.m, lru.key)
    }
}

func (c *LRUCache) remove(n *Node) { // O(1) unlink
    n.prev.next = n.next
    n.next.prev = n.prev
}

func (c *LRUCache) insertHead(n *Node) { // O(1) insert after head
    n.next = c.head.next
    n.prev = c.head
    c.head.next.prev = n
    c.head.next = n
}
```
```python
class DLLNode:
    def __init__(self, key: int, val: int):
        self.key, self.val = key, val
        self.prev: DLLNode | None = None
        self.next: DLLNode | None = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map: dict[int, DLLNode] = {}
        self.head = DLLNode(0, 0)  # sentinel nodes
        self.tail = DLLNode(0, 0)
        self.head.next = self.tail  # head ↔ tail
        self.tail.prev = self.head

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)       # detach from current position
        self._insert_head(node)  # move to front (most recent)
        return node.val

    def put(self, key: int, val: int):
        if key in self.map:
            self._remove(self.map[key])
        node = DLLNode(key, val)
        self.map[key] = node
        self._insert_head(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev  # node before tail sentinel
            self._remove(lru)
            del self.map[lru.key]

    def _remove(self, n: DLLNode):  # O(1) unlink
        n.prev.next = n.next
        n.next.prev = n.prev

    def _insert_head(self, n: DLLNode):  # O(1) insert after head
        n.next = self.head.next
        n.prev = self.head
        self.head.next.prev = n
        self.head.next = n
```
```rust
use std::collections::{HashMap, LinkedList};

// Note: std's LinkedList has no O(1) arbitrary removal, so `touch`
// rebuilds the list in O(n). A true O(1) version needs an index-based
// doubly-linked list (or an external crate).
struct LRUCache {
    capacity: usize,
    map: HashMap<i32, i32>, // key → val
    list: LinkedList<i32>,  // keys, MRU (front) → LRU (back)
}

impl LRUCache {
    pub fn new(capacity: i32) -> Self {
        LRUCache {
            capacity: capacity as usize,
            map: HashMap::new(),
            list: LinkedList::new(),
        }
    }

    pub fn get(&mut self, key: i32) -> i32 {
        if let Some(&val) = self.map.get(&key) {
            self.touch(key); // move to front (most recent)
            val
        } else {
            -1
        }
    }

    pub fn put(&mut self, key: i32, value: i32) {
        if self.map.contains_key(&key) {
            self.map.insert(key, value);
            self.touch(key); // update + move to front
            return;
        }
        if self.map.len() == self.capacity {
            if let Some(lru_key) = self.list.pop_back() { // evict LRU
                self.map.remove(&lru_key);
            }
        }
        self.map.insert(key, value);
        self.list.push_front(key); // insert at head (most recent)
    }

    fn touch(&mut self, key: i32) {
        // rebuild list without key, then push to front — O(n)
        self.list = self.list.iter().filter(|&&k| k != key).cloned().collect();
        self.list.push_front(key);
    }
}
```
```typescript
class DLLNode {
    key: number;
    val: number;
    prev: DLLNode | null = null;
    next: DLLNode | null = null;
    constructor(key: number, val: number) {
        this.key = key;
        this.val = val;
    }
}

class LRUCache {
    private map = new Map<number, DLLNode>();
    private head: DLLNode; // sentinel nodes
    private tail: DLLNode;
    private capacity: number;

    constructor(capacity: number) {
        this.capacity = capacity;
        this.head = new DLLNode(0, 0);
        this.tail = new DLLNode(0, 0);
        this.head.next = this.tail; // head ↔ tail
        this.tail.prev = this.head;
    }

    get(key: number): number {
        if (!this.map.has(key)) return -1;
        const node = this.map.get(key)!;
        this.remove(node);     // detach from current position
        this.insertHead(node); // move to front (most recent)
        return node.val;
    }

    put(key: number, val: number): void {
        if (this.map.has(key)) this.remove(this.map.get(key)!);
        const node = new DLLNode(key, val);
        this.map.set(key, node);
        this.insertHead(node);
        if (this.map.size > this.capacity) {
            const lru = this.tail.prev!; // node before tail sentinel
            this.remove(lru);
            this.map.delete(lru.key);
        }
    }

    private remove(n: DLLNode): void { // O(1) unlink
        n.prev!.next = n.next;
        n.next!.prev = n.prev;
    }

    private insertHead(n: DLLNode): void { // O(1) insert after head
        n.next = this.head.next;
        n.prev = this.head;
        this.head.next!.prev = n;
        this.head.next = n;
    }
}
```
```java
import java.util.ArrayDeque;
import java.util.Deque;

class MinStack {
    private Deque<Integer> stack = new ArrayDeque<>();
    private Deque<Integer> minStack = new ArrayDeque<>();

    public void push(int val) {
        stack.push(val);
        int curMin = minStack.isEmpty() ? val : Math.min(val, minStack.peek());
        minStack.push(curMin); // track min at this level
    }

    public void pop() {
        stack.pop();
        minStack.pop(); // always pop in sync
    }

    public int top() { return stack.peek(); }

    public int getMin() { return minStack.peek(); } // O(1)
}
```
```go
type MinStack struct {
    stack    []int
    minStack []int
}

func Constructor() MinStack { return MinStack{} }

func (s *MinStack) Push(val int) {
    s.stack = append(s.stack, val)
    curMin := val
    if len(s.minStack) > 0 {
        top := s.minStack[len(s.minStack)-1]
        if top < curMin {
            curMin = top
        }
    }
    s.minStack = append(s.minStack, curMin) // track min at this level
}

func (s *MinStack) Pop() {
    s.stack = s.stack[:len(s.stack)-1]
    s.minStack = s.minStack[:len(s.minStack)-1] // always pop in sync
}

func (s *MinStack) Top() int { return s.stack[len(s.stack)-1] }

func (s *MinStack) GetMin() int {
    return s.minStack[len(s.minStack)-1] // O(1)
}
```
```python
class MinStack:
    def __init__(self):
        self.stack: list[int] = []
        self.min_stack: list[int] = []

    def push(self, val: int) -> None:
        self.stack.append(val)
        cur_min = val if not self.min_stack else min(val, self.min_stack[-1])
        self.min_stack.append(cur_min)  # track min at this level

    def pop(self) -> None:
        self.stack.pop()
        self.min_stack.pop()  # always pop in sync

    def top(self) -> int:
        return self.stack[-1]

    def get_min(self) -> int:
        return self.min_stack[-1]  # O(1)
```
```rust
struct MinStack {
    stack: Vec<i32>,
    min_stack: Vec<i32>,
}

impl MinStack {
    pub fn new() -> Self {
        MinStack { stack: Vec::new(), min_stack: Vec::new() }
    }

    pub fn push(&mut self, val: i32) {
        self.stack.push(val);
        let cur_min = match self.min_stack.last() {
            Some(&m) => val.min(m),
            None => val,
        };
        self.min_stack.push(cur_min); // track min at this level
    }

    pub fn pop(&mut self) {
        self.stack.pop();
        self.min_stack.pop(); // always pop in sync
    }

    pub fn top(&self) -> i32 {
        *self.stack.last().unwrap()
    }

    pub fn get_min(&self) -> i32 {
        *self.min_stack.last().unwrap() // O(1)
    }
}
```
```typescript
class MinStack {
    private stack: number[] = [];
    private minStack: number[] = [];

    push(val: number): void {
        this.stack.push(val);
        const curMin = this.minStack.length === 0
            ? val
            : Math.min(val, this.minStack[this.minStack.length - 1]);
        this.minStack.push(curMin); // track min at this level
    }

    pop(): void {
        this.stack.pop();
        this.minStack.pop(); // always pop in sync
    }

    top(): number {
        return this.stack[this.stack.length - 1];
    }

    getMin(): number {
        return this.minStack[this.minStack.length - 1]; // O(1)
    }
}
```
Sentinel nodes eliminate edge cases. In the LRU Cache, the HEAD and TAIL sentinels mean the list is never truly empty. You never check if (head == null). The remove helper is two pointer assignments and insertHead is four—no branches, no special cases.
Let us trace through a full LRU Cache scenario step by step, showing the hash map and linked list side by side at every stage.
Operations: put(1,10), put(2,20), put(3,30), get(2), put(4,40)
| OPERATION | ACTION | LIST (HEAD→TAIL) | MAP SIZE |
|---|---|---|---|
| put(1,10) | Insert at head | [1:10] | 1/3 |
| put(2,20) | Insert at head | [2:20] ↔ [1:10] | 2/3 |
| put(3,30) | Insert at head | [3:30] ↔ [2:20] ↔ [1:10] | 3/3 (full) |
| get(2) | Move to head, return 20 | [2:20] ↔ [3:30] ↔ [1:10] | 3/3 |
| put(4,40) | Evict key 1, insert 4 at head | [4:40] ↔ [2:20] ↔ [3:30] | 3/3 |
Notice the pattern: Every get and put touches exactly one node in the hash map (O(1) lookup) and performs at most two linked-list operations (remove + insertHead, both O(1) with direct node references). No iteration is ever needed.
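The trace above can be checked mechanically. Here is a minimal sketch in Python, reusing the OrderedDict shortcut from earlier (the dict plays both the hash map and the ordering list, with the LRU end at the front rather than at a tail sentinel):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: OrderedDict is both hash map and recency order."""
    def __init__(self, capacity):
        self.cap = capacity
        self.cache = OrderedDict()  # front = LRU, back = MRU

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)  # evict least recently used

# Replay the traced scenario: put(1,10), put(2,20), put(3,30), get(2), put(4,40)
c = LRUCache(3)
c.put(1, 10); c.put(2, 20); c.put(3, 30)
assert c.get(2) == 20                      # get(2) moves key 2 to the MRU end
c.put(4, 40)                               # cache full: key 1 (LRU) is evicted
assert c.get(1) == -1                      # evicted
assert list(c.cache.keys()) == [3, 2, 4]   # LRU → MRU, mirrors the table's tail → head
```

Note the final order [3, 2, 4] is the table's last row read tail-to-head: [4:40] ↔ [2:20] ↔ [3:30].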
These are the classic LeetCode problems that use the data structure design pattern, listed in rough order of difficulty.
Practice tip: Start with #155 Min Stack (direct two-stack pattern) and #705/#706 (understand hash internals). Then tackle #146 LRU Cache (the flagship composite design problem). Once comfortable, try #380 (array + map for O(1) random) and #460 LFU Cache (the ultimate multi-structure composite).
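To preview why #460 earns the "ultimate composite" label, here is a hedged Python sketch of an LFU cache built from three maps—values, frequencies, and frequency buckets, each bucket an OrderedDict so ties break by recency. Names like `freq_to_keys` and `_touch` are illustrative choices, not canonical:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    """Evict the least-frequently-used key; break ties by least-recent use."""
    def __init__(self, capacity: int):
        self.cap = capacity
        self.key_to_val = {}
        self.key_to_freq = {}
        self.freq_to_keys = defaultdict(OrderedDict)  # freq -> keys in LRU order
        self.min_freq = 0

    def _touch(self, key):
        """Move key from its frequency bucket to the next one up. O(1)."""
        f = self.key_to_freq[key]
        del self.freq_to_keys[f][key]
        if not self.freq_to_keys[f]:
            del self.freq_to_keys[f]
            if self.min_freq == f:
                self.min_freq = f + 1
        self.key_to_freq[key] = f + 1
        self.freq_to_keys[f + 1][key] = None

    def get(self, key) -> int:
        if key not in self.key_to_val:
            return -1
        self._touch(key)
        return self.key_to_val[key]

    def put(self, key, value) -> None:
        if self.cap == 0:
            return
        if key in self.key_to_val:
            self.key_to_val[key] = value
            self._touch(key)
            return
        if len(self.key_to_val) >= self.cap:
            # evict the LRU key within the minimum-frequency bucket
            evict_key, _ = self.freq_to_keys[self.min_freq].popitem(last=False)
            if not self.freq_to_keys[self.min_freq]:
                del self.freq_to_keys[self.min_freq]
            del self.key_to_val[evict_key]
            del self.key_to_freq[evict_key]
        self.key_to_val[key] = value
        self.key_to_freq[key] = 1
        self.freq_to_keys[1][key] = None
        self.min_freq = 1
```

Same recipe as the LRU Cache, just one layer deeper: the hash map handles lookup, while the buckets of OrderedDicts handle both orderings (frequency, then recency) in O(1).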
LRU Cache: Hash map lookup is O(1). Linked list remove/insert are O(1) because we have a direct pointer to the node (from the map). No traversal is ever needed. Eviction removes the tail's predecessor in O(1).
Min Stack: Each push/pop does exactly two stack operations (one on the main stack, one on the min stack). getMin() just peeks the min stack. All O(1).
Space: Both structures store at most n elements (where n is the capacity for caches, or the number of elements for stacks). The hash map and linked list share the same nodes (not duplicate copies), so space is O(n).
The space-time tradeoff: We use extra space (a second stack or a hash map) specifically to avoid iteration. Every design pattern problem is fundamentally about trading space for time, buying O(1) operations by storing redundant pointers or metadata.
Combine two structures for O(1) everything. No single data structure gives you fast lookup, fast ordering, and fast insert/delete. Composites like hash map + linked list achieve what no solo structure can.
Use sentinel nodes to eliminate edge cases. Dummy head and tail nodes in a doubly-linked list mean the list is never truly empty. Remove and insert become simple, branchless pointer reassignments—two for remove, four for insert.
Augmentation means "bolt on a parallel tracker." For Min Stack, the min stack mirrors the main stack. For Max Stack, track the max. The pattern generalizes: any aggregate query can be answered in O(1) by maintaining it alongside the primary structure.
Store direct node references in the map. The hash map must point directly to the linked list node (not just the key or value). This is what enables O(1) remove: you skip the traversal and go straight to the node to unlink it.
Identify the operations, then choose the composite. List every required operation and its time constraint. Then ask: "Which primitive structures, when layered together, satisfy all constraints?" LRU = map + list. RandomizedSet = map + array. MinStack = stack + stack. The design follows directly from the requirements.
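As one more worked composite, the map + array pairing named above (the RandomizedSet shape) can be sketched in Python. The swap-with-last trick is what makes remove O(1); the names `index_of` and `items` are illustrative:

```python
import random

class RandomizedSet:
    """insert, remove, and get_random in average O(1): dict + array."""
    def __init__(self):
        self.index_of = {}  # value -> its index in self.items
        self.items = []     # values, densely packed for uniform sampling

    def insert(self, val) -> bool:
        if val in self.index_of:
            return False
        self.index_of[val] = len(self.items)
        self.items.append(val)
        return True

    def remove(self, val) -> bool:
        if val not in self.index_of:
            return False
        i = self.index_of[val]
        last = self.items[-1]
        self.items[i] = last      # overwrite the hole with the last item
        self.index_of[last] = i   # ...and fix its recorded index
        self.items.pop()          # O(1) removal from the end
        del self.index_of[val]
        return True

    def get_random(self):
        return random.choice(self.items)  # uniform over current members
```

The array exists purely because hash maps cannot sample uniformly in O(1)—the same "cover the other structure's weakness" logic that drives the LRU design.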