LRU Cache Mechanism
Problem Description:
Design an LRU (Least Recently Used) cache mechanism. It should support the following operations: get and put. When the cache reaches its capacity, it should invalidate the least recently used item before inserting a new one.
Solution Process:
- Requirements Analysis
  - get(key): If the key exists in the cache, return the corresponding value and mark the entry as recently used; otherwise return -1.
  - put(key, value): If the key already exists, update its value. If it does not exist, write the data; if the cache is full, remove the least recently used entry before writing.
  - Time complexity requirement: both get and put operations must run in O(1); a short trace after this list illustrates the expected behavior.
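To make these requirements concrete, here is a short hypothetical trace against a capacity-2 cache (the LRUCache class itself is built up in the sections below):

```python
cache = LRUCache(2)
cache.put(1, 1)   # cache holds key 1
cache.put(2, 2)   # cache holds keys 1 and 2
cache.get(1)      # returns 1; key 1 is now the most recently used
cache.put(3, 3)   # cache full: evicts key 2, the least recently used
cache.get(2)      # returns -1 (no longer present)
```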
- Data Structure Selection
  - Hash Table: provides O(1) lookup but cannot record the order of usage.
  - Doubly Linked List: allows fast insertion at the head and deletion at the tail, and can record access order.
  - Combined Solution: Hash Table + Doubly Linked List
    - The hash table stores the mapping from key to linked list node.
    - The doubly linked list arranges nodes in order of usage (most recently used at the head, least recently used at the tail).
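As an aside, CPython's collections.OrderedDict already couples a hash map with a doubly linked list internally, so the same design can be sketched much more compactly. This is a minimal alternative for comparison (the class name OrderedDictLRU is hypothetical), not the implementation developed below:

```python
from collections import OrderedDict

class OrderedDictLRU:
    """Compact LRU sketch built on OrderedDict (most recently used at the end)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry
```

The hand-rolled version below is still worth studying, since it makes the node bookkeeping explicit.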
- Detailed Design
  - Linked List Node Definition:

```python
class DLinkedNode:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None
```

  - Cache Class Structure:

```python
class LRUCache:
    def __init__(self, capacity: int):
        self.cache = {}  # Hash table: key -> node
        self.capacity = capacity
        self.size = 0
        # Dummy head and tail nodes to simplify edge case handling
        self.head = DLinkedNode()
        self.tail = DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head
```
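One benefit of the dummy head and tail nodes is that the list is never empty, so the helper methods defined next never have to check for None neighbors. A tiny check of that invariant (hypothetical usage of the class just defined):

```python
cache = LRUCache(2)
# Even with no real entries, the sentinels are linked to each other
assert cache.head.next is cache.tail
assert cache.tail.prev is cache.head
```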
- Core Operation Implementation
  - Add Node to Head (mark as recently used):

```python
def addToHead(self, node):
    # Insert the node right after the dummy head
    node.prev = self.head
    node.next = self.head.next
    self.head.next.prev = node
    self.head.next = node
```

  - Remove Node:

```python
def removeNode(self, node):
    # Unlink the specified node from the linked list
    node.prev.next = node.next
    node.next.prev = node.prev
```

  - Move to Head (remove, then add to head):

```python
def moveToHead(self, node):
    self.removeNode(node)
    self.addToHead(node)
```

  - Remove Tail Node (delete the least recently used entry):

```python
def removeTail(self):
    node = self.tail.prev
    self.removeNode(node)
    return node  # Return the removed node for deletion from the hash table
```
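To see the ordering these helpers maintain, here is a small sketch that drives them directly, bypassing the hash table and size bookkeeping (assuming the methods above live on LRUCache, as in the class structure earlier):

```python
cache = LRUCache(2)
a = DLinkedNode(1, "a")
b = DLinkedNode(2, "b")
cache.addToHead(a)              # list: head <-> a <-> tail
cache.addToHead(b)              # list: head <-> b <-> a <-> tail
cache.moveToHead(a)             # list: head <-> a <-> b <-> tail
assert cache.removeTail() is b  # the tail node is the least recently used
```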
- get Operation Implementation

```python
def get(self, key: int) -> int:
    if key not in self.cache:
        return -1
    node = self.cache[key]
    self.moveToHead(node)  # Mark as recently used
    return node.value
```

- put Operation Implementation

```python
def put(self, key: int, value: int) -> None:
    if key in self.cache:
        # Key exists: update the value and move the node to the head
        node = self.cache[key]
        node.value = value
        self.moveToHead(node)
    else:
        # Create a new node
        node = DLinkedNode(key, value)
        self.cache[key] = node
        self.addToHead(node)
        self.size += 1
        # If capacity is exceeded, remove the tail node
        if self.size > self.capacity:
            removed = self.removeTail()
            del self.cache[removed.key]
            self.size -= 1
```
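With all the pieces in place, a quick sanity check (a sketch, assuming the methods above are assembled into the LRUCache class) exercises both branches of put, including updating an existing key:

```python
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
cache.put(1, 10)           # existing key: value updated, moved to head, no eviction
assert cache.get(1) == 10
cache.put(3, 3)            # capacity exceeded: key 2 (least recently used) is evicted
assert cache.get(2) == -1
assert cache.get(3) == 3
assert cache.size == 2
```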
- Complexity Analysis
  - Time Complexity: both get and put operations run in O(1).
  - Space Complexity: O(capacity), for the hash table and the doubly linked list.
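A rough way to observe the O(1) behavior empirically is to time lookups as the capacity grows (a quick timing sketch, not a rigorous benchmark; the absolute numbers will vary by machine):

```python
import time

for capacity in (1_000, 100_000, 1_000_000):
    cache = LRUCache(capacity)
    for i in range(capacity):    # fill the cache to capacity
        cache.put(i, i)
    start = time.perf_counter()
    for i in range(100_000):     # per-get cost should not grow with capacity
        cache.get(i % capacity)
    elapsed = time.perf_counter() - start
    print(f"capacity={capacity:>9}: {elapsed / 100_000 * 1e9:.0f} ns per get")
```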
- Practical Application Scenarios
  - Browser caching of recently visited pages.
  - Operating system memory page replacement.
  - Database query caching.
  - Cache eviction strategies in in-memory databases such as Redis.
This design combines the fast lookup of a hash table with the order-maintaining structure of a doubly linked list, meeting both the functional and the O(1) performance requirements of an LRU cache.