#146 - LRU Cache
- Difficulty: Medium
- Topics: Hash Table, Linked List, Design, Doubly-Linked List
- Link: https://leetcode.com/problems/lru-cache/
Problem Description
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
- `LRUCache(int capacity)` Initialize the LRU cache with positive size `capacity`.
- `int get(int key)` Return the value of the `key` if the key exists, otherwise return `-1`.
- `void put(int key, int value)` Update the value of the `key` if the `key` exists. Otherwise, add the `key-value` pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1); // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); // return -1 (not found)
lRUCache.get(3); // return 3
lRUCache.get(4); // return 4
Constraints:
- 1 <= capacity <= 3000
- 0 <= key <= 10^4
- 0 <= value <= 10^5
- At most 2 * 10^5 calls will be made to get and put.
Solution
1. Problem Deconstruction
Technical Description: Design an LRU (Least Recently Used) cache with fixed capacity that supports two atomic operations:
- get(key) - Retrieve the value associated with key while updating access recency
- put(key, value) - Insert or update a key-value pair while maintaining the capacity constraint
The data structure must guarantee O(1) time complexity for both operations by combining hash table semantics with ordered access tracking. When capacity is exceeded, the least recently accessed element must be evicted according to the LRU replacement policy.
Beginner Description: Imagine you have a small table that can only hold a limited number of items. Every time you use an item, you move it to the front of the table. When the table is full and you need to add something new, you remove the item at the back of the table (the one nobody has used for the longest time) to make space.
Mathematical Formulation: Let $C$ = cache capacity, $K$ = key set, $V$ = value set.
- Operations: $\text{get}: K \to V \cup \{-1\}$ and $\text{put}: K \times V \to \text{void}$
- Maintain an ordered sequence $S = (k_1, k_2, \ldots, k_m)$ with $m \le C$, where $k_1$ is the most recently used key
- $\text{get}(k)$: if $\exists\, i$ such that $k_i = k$, return the value of $k_i$ and move $k_i$ to the front of $S$
- $\text{put}(k, v)$: if $k \in S$, update its value and move it to the front; else add $k$ to the front and, if $|S| > C$, remove the last element $k_m$
Constraint Analysis:
- 1 <= capacity <= 3000: Limits the cache to at most 3000 entries, so O(n) operations might seem acceptable, but O(1) is explicitly required
- 0 <= key <= 10^4: Keys are bounded integers, enabling array-based indexing if needed
- 0 <= value <= 10^5: Values have a wide range but do not affect the algorithm design
- At most 2 * 10^5 total calls: Demands efficient O(1) operations to handle worst-case workloads
- Hidden edge cases: gets on an empty cache, puts with duplicate keys, capacity = 1, consecutive gets on the same key
2. Intuition Scaffolding
Real-World Metaphor: Like a library with limited shelf space - popular books stay at the front desk, while rarely borrowed books get moved to storage (evicted) when new arrivals need space.
Gaming Analogy: Inventory system in RPG games - you can only carry limited items. Frequently used weapons stay in quick-access slots, while unused items get automatically discarded when you pick up new loot.
Math Analogy: Maintaining a sliding window over an access sequence where we track recency as a total ordering, with eviction following the extremum principle (remove minimum recency).
Common Pitfalls:
- Array-based approaches: O(n) shifts when updating recency
- Heap/priority queue: Cannot update priorities in O(1)
- Single linked list: Cannot remove arbitrary nodes without previous pointer
- Hash-only solution: Loses ordering information
- Tree structures: O(log n) complexity violates requirements
- Forgetting to update recency on get operations
3. Approach Encyclopedia
Brute Force Approach:
```python
# Brute force: dict for values + Python list for recency order
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}   # key -> value
        self.order = []   # keys in access order [most recent ... least recent]

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        # Move to front: O(n) operations
        self.order.remove(key)     # O(n) - the killer
        self.order.insert(0, key)  # O(n)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache[key] = value
            self.order.remove(key)     # O(n)
            self.order.insert(0, key)  # O(n)
        else:
            if len(self.cache) == self.capacity:
                lru = self.order.pop()  # Remove last (least recently used)
                del self.cache[lru]
            self.cache[key] = value
            self.order.insert(0, key)
```
Complexity: Time: O(n) per operation, Space: O(capacity)
Hash Map + Doubly Linked List (Optimal):
Visualization:
```
Hash Map:    {key → Node}
Linked List: Head ↔ Node₁ ↔ Node₂ ↔ ... ↔ Nodeₙ ↔ Tail
                  (Most Recent)          (Least Recent)
```
Operation Flow:
```
get(key):
    look up key → Node via HashMap
    unlink Node from its current position
    insert Node right after Head

put(key, value):
    if key exists: update value + move node to head
    else:
        if full:
            remove node before Tail (the LRU entry)
            remove its key from HashMap
        create new node
        insert it after Head
        add it to HashMap
```
Complexity Proof:
- HashMap operations: O(1) average case
- Doubly linked list insertion/deletion: O(1) with reference
- Total: O(1) average time complexity
- Space: O(capacity) for nodes + O(capacity) for hash map = O(capacity)
4. Code Deep Dive
```python
class ListNode:
    def __init__(self, key=0, val=0):
        self.key = key    # Store key for reverse mapping during eviction
        self.val = val    # The actual value
        self.prev = None  # Doubly linked for O(1) removal
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}  # key -> ListNode mapping
        # Dummy head and tail simplify edge case handling
        self.head = ListNode()  # Most recent end
        self.tail = ListNode()  # Least recent end
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_node(self, node: ListNode) -> None:
        """Add node right after head (most recent position)."""
        # Link node between head and head.next
        node.prev = self.head
        node.next = self.head.next
        # Update surrounding nodes
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node: ListNode) -> None:
        """Remove node from its current position."""
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_head(self, node: ListNode) -> None:
        """Move an existing node to the most recent position."""
        self._remove_node(node)  # Unlink from current position
        self._add_node(node)     # Re-insert at head

    def _pop_tail(self) -> ListNode:
        """Remove and return the least recently used node."""
        lru_node = self.tail.prev
        self._remove_node(lru_node)
        return lru_node

    def get(self, key: int) -> int:
        node = self.cache.get(key)  # HashMap lookup: O(1) average
        if not node:
            return -1  # Key not found
        self._move_to_head(node)    # Update recency: O(1) with DLL
        return node.val

    def put(self, key: int, value: int) -> None:
        node = self.cache.get(key)
        if not node:
            # New key insertion
            new_node = ListNode(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            # Capacity check and eviction
            if len(self.cache) > self.capacity:
                lru_node = self._pop_tail()
                del self.cache[lru_node.key]  # Use stored key
        else:
            # Existing key update
            node.val = value
            self._move_to_head(node)
```
Edge Case Handling:
- Empty cache get: self.cache.get returns None → get returns -1
- Capacity = 1: a single node correctly cycles between head and tail
- Duplicate puts: value update plus recency refresh in put's else branch
- Consecutive gets: repeated moves to head keep the ordering consistent
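To sanity-check the implementation, here is a small test driver that replays Example 1 from the problem statement plus the capacity = 1 edge case; run_checks is an illustrative helper name, and the expected values come straight from the example output above.

```python
def run_checks():
    # Replay Example 1 from the problem statement
    cache = LRUCache(2)
    cache.put(1, 1)         # cache is {1=1}
    cache.put(2, 2)         # cache is {1=1, 2=2}
    assert cache.get(1) == 1
    cache.put(3, 3)         # evicts key 2
    assert cache.get(2) == -1
    cache.put(4, 4)         # evicts key 1
    assert cache.get(1) == -1
    assert cache.get(3) == 3
    assert cache.get(4) == 4

    # Capacity = 1: every new put evicts the previous key
    tiny = LRUCache(1)
    tiny.put(1, 1)
    tiny.put(2, 2)          # evicts key 1
    assert tiny.get(1) == -1
    assert tiny.get(2) == 2

    print("all checks passed")

run_checks()
```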
5. Complexity War Room
Hardware-Aware Analysis:
- With capacity ≤ 3000, the entire cache fits in L2/L3 cache (~MB range)
- HashMap with ~3000 entries has minimal collisions
- Pointer chasing in linked list is cache-unfriendly but acceptable at this scale
Industry Comparison Table:
| Approach | Time | Space | Readability | Interview Viability |
|---|---|---|---|---|
| Array + Linear Scan | O(n) | O(c) | 9/10 | ❌ Fails constraints |
| OrderedDict | O(1) | O(c) | 10/10 | ✅ If allowed |
| HashMap + DLL | O(1) | O(c) | 7/10 | ✅ Gold standard |
| TreeMap | O(log c) | O(c) | 8/10 | ❌ Not O(1) |
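Since the table lists OrderedDict as a viable alternative, here is a minimal sketch of that variant; LRUCacheOrdered is an illustrative name, while move_to_end and popitem(last=False) are standard collections.OrderedDict methods that give the same O(1) behavior.

```python
from collections import OrderedDict

class LRUCacheOrdered:
    """LRU cache built on OrderedDict: insertion order doubles as recency order."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()   # least recent first, most recent last

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used (front)
```

This is why it scores 10/10 on readability: all the ordering bookkeeping is hidden inside the standard library.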
6. Pro Mode Extras
Variants Section:
- LRU with Expiration: Add TTL to each entry, cleanup during access
- Multi-level LRU: Hot/warm/cold segments with different eviction policies
- LRU-K: Track K most recent accesses for better popularity estimation
- Thread-safe LRU: Add locking mechanisms for concurrent access (see the sketch after this list)
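Of these variants, thread safety is the most common interview follow-up. Below is a minimal sketch assuming a single coarse-grained threading.Lock around the LRUCache class from section 4; ThreadSafeLRUCache is an illustrative wrapper name. Note that even get mutates recency order, so reads cannot bypass the lock.

```python
import threading

class ThreadSafeLRUCache:
    """Illustrative wrapper: serializes all access with one coarse-grained lock."""

    def __init__(self, capacity: int):
        self._inner = LRUCache(capacity)  # the HashMap + DLL cache from section 4
        self._lock = threading.Lock()

    def get(self, key: int) -> int:
        with self._lock:                  # even get mutates recency order
            return self._inner.get(key)

    def put(self, key: int, value: int) -> None:
        with self._lock:
            self._inner.put(key, value)
```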
Interview Cheat Sheet:
- First words: “I’ll use HashMap for O(1) access and doubly linked list for O(1) recency updates”
- Key insight: “Store key in node for reverse mapping during eviction”
- Common follow-ups: “How would you make it thread-safe?” (simplest: one coarse-grained lock; even get mutates recency, so plain reader-writer locks don’t fit cleanly)
- Testing strategy: “Check capacity boundaries, duplicate operations, access patterns”