Queues: First In, First Out
Understanding queues, the FIFO data structure behind task scheduling, breadth-first search, message passing, and buffering systems.
Terminology
- FIFO (First In, First Out)
- An ordering principle where the earliest added element is the first one removed. Queues follow FIFO ordering.
- Enqueue
- The operation that adds an element to the back (tail) of the queue.
- Dequeue
- The operation that removes and returns the element at the front (head) of the queue.
- Peek (also called Front)
- Returns the front element without removing it.
- Circular queue (ring buffer)
- A fixed-size array-based queue where the head and tail indices wrap around using modular arithmetic, avoiding the need to shift elements.
- Deque (double-ended queue)
- A generalized queue that supports insertion and removal at both the front and the back in $O(1)$ time.
- Modular arithmetic
- Arithmetic where values wrap around after reaching a fixed number (the modulus). Used in circular queues: `index = (index + 1) % capacity`.
- Amortized $O(1)$
- An operation that is $O(1)$ on average over a sequence of operations, even though individual operations may occasionally cost more (e.g. when a dynamic array resizes).
- BFS (Breadth-First Search)
- A graph traversal algorithm that explores all neighbors at the current depth before moving to the next level. BFS uses a queue to track which node to visit next.
What & Why
A queue is a linear data structure that follows the FIFO principle: the first element enqueued is the first one dequeued. Think of a line at a coffee shop: the person who arrives first gets served first.
Why does this matter? Queues model any process where order of arrival determines order of service. Operating systems use queues to schedule processes and I/O requests. Network routers buffer packets in queues. Web servers queue incoming requests. BFS, one of the most important graph algorithms, is built on a queue. Message brokers like Kafka and RabbitMQ are fundamentally queue systems.
The interface is minimal: enqueue, dequeue, peek. By restricting insertion to one end and removal to the other, you get a data structure that naturally models fairness, ordering, and buffering.
Key properties:
- FIFO ordering: the earliest added element is always the first removed
- $O(1)$ enqueue, dequeue, and peek: all core operations are constant time
- Restricted access: insert at the back, remove from the front
- Multiple implementations: linked-list-backed, circular array, and deque variants each offer different trade-offs
How It Works
The Queue Model
A queue exposes three core operations. enqueue(x) places element $x$ at the back. dequeue() removes and returns the front element. peek() returns the front element without removing it. All three run in $O(1)$ time.
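In Python, for instance, `collections.deque` provides all three operations directly, which makes for a quick sketch of the interface (`append` enqueues at the back, `popleft` dequeues from the front):

```python
from collections import deque

q = deque()          # empty queue
q.append("a")        # enqueue "a" at the back
q.append("b")        # enqueue "b"
print(q[0])          # peek: "a" is still at the front
print(q.popleft())   # dequeue: returns "a"
print(q.popleft())   # dequeue: returns "b"
```

Note that a plain Python list is a poor substitute here: `list.pop(0)` shifts every remaining element and costs $O(n)$.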
Linked-List-Backed Queue
The simplest correct implementation uses a singly linked list with a head pointer (front) and a tail pointer (back). Enqueue appends a new node at the tail. Dequeue removes the node at the head. Both operations are $O(1)$ with no amortization needed.
The trade-off is the same as with linked-list stacks: each node carries pointer overhead, and scattered heap allocations hurt cache locality.
Circular Queue (Ring Buffer)
A circular queue uses a fixed-size array with two indices, head and tail, that wrap around using modular arithmetic. When either index reaches the end of the array, it wraps back to index 0.
This avoids the problem with naive array queues where dequeuing from the front leaves wasted space. Instead of shifting all elements left (which costs $O(n)$), the head index simply advances.
The key design question is how to distinguish a full queue from an empty one, since both could have head == tail. Two common strategies:
- Waste one slot: the queue is full when `(tail + 1) % capacity == head`. This sacrifices one slot but keeps the logic simple.
- Track a count: maintain a separate `size` variable. Full when `size == capacity`, empty when `size == 0`.
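The waste-one-slot strategy can be sketched in a few lines of Python (the `is_full` / `is_empty` helpers here are illustrative, not a standard API):

```python
capacity = 4  # one slot is intentionally left unused

def is_full(head, tail):
    # full one step early, so a full queue never collides with an empty one
    return (tail + 1) % capacity == head

def is_empty(head, tail):
    return head == tail

head = tail = 0
assert is_empty(head, tail)
# enqueue three items: only capacity - 1 slots are usable
for _ in range(3):
    tail = (tail + 1) % capacity
assert is_full(head, tail)
```

With this convention, `head == tail` always means empty, so no extra `size` counter is needed.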
Deque (Double-Ended Queue)
A deque generalizes the queue by allowing insertion and removal at both ends. It supports four operations: pushFront, pushBack, popFront, and popBack, all in $O(1)$ time.
Deques are typically implemented as circular arrays (like Java's ArrayDeque and Python's collections.deque). They can serve as both a stack (push/pop from the same end) and a queue (push at one end, pop from the other), making them a versatile default choice.
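As a quick illustration, Python's `collections.deque` exposes all four operations (`appendleft`/`append` insert at the front and back, `popleft`/`pop` remove from them):

```python
from collections import deque

d = deque()
d.append(1)         # pushBack
d.appendleft(0)     # pushFront
d.append(2)         # pushBack
print(list(d))      # [0, 1, 2]
print(d.popleft())  # popFront: returns 0
print(d.pop())      # popBack: returns 2
```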
A common use case is the sliding window maximum problem: maintain a deque of indices where the front always holds the index of the current maximum. As the window slides, pop from the back any elements smaller than the new entry, and pop from the front any elements that have left the window.
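That technique can be sketched in Python as follows (the function name `sliding_window_max` is illustrative; the deque stores indices whose values are strictly decreasing from front to back):

```python
from collections import deque

def sliding_window_max(nums, k):
    """Return the maximum of each length-k window in O(n) total time."""
    dq = deque()  # indices into nums; values decrease front-to-back
    out = []
    for i, x in enumerate(nums):
        # drop back entries that can never again be a window maximum
        while dq and nums[dq[-1]] <= x:
            dq.pop()
        dq.append(i)
        # drop the front entry once it slides out of the window
        if dq[0] <= i - k:
            dq.popleft()
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))
# [3, 3, 5, 5, 6, 7]
```

Each index is appended and popped at most once, which is why the total work is $O(n)$ despite the nested loop.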
Complexity Analysis
| Operation | Linked-list | Circular array | Notes |
|---|---|---|---|
| enqueue(x) | $O(1)$ | $O(1)$ amortized | Circular array may resize if dynamic |
| dequeue() | $O(1)$ | $O(1)$ | Advance head pointer or index |
| peek() | $O(1)$ | $O(1)$ | Read without removal |
| isEmpty() | $O(1)$ | $O(1)$ | Check size or head == tail |
| Search | $O(n)$ | $O(n)$ | Must traverse to find |
Space Complexity
A linked-list queue holding $n$ elements uses $O(n)$ space, with per-node pointer overhead.
A circular array queue with capacity $c$ uses $O(c)$ space. If the array is dynamically resizable (doubling strategy), the allocated capacity is at most $2n$.
Fixed-size circular queues are common in embedded systems and kernel buffers where predictable memory usage matters more than flexibility.
Naive Array Queue vs. Circular Queue
A naive array queue that dequeues by shifting all elements left costs $O(n)$ per dequeue. Over $n$ dequeue operations, that's $O(n^2)$ total work. A circular queue avoids this entirely.
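A small Python sketch makes the quadratic cost concrete by counting element moves (here `list.pop(0)` plays the role of the naive dequeue, since it shifts every remaining item):

```python
def naive_dequeues(n):
    """Count the element moves caused by n naive (shifting) dequeues."""
    moves = 0
    queue = list(range(n))
    while queue:
        queue.pop(0)         # shifts all remaining items one slot left
        moves += len(queue)  # elements moved by this single dequeue
    return moves

# n dequeues move (n-1) + (n-2) + ... + 0 = n(n-1)/2 elements: O(n^2)
print(naive_dequeues(100))  # 4950
```

A circular queue performs zero element moves for the same workload; each dequeue only advances the head index.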
Implementation
Circular Queue (Pseudocode)
    STRUCTURE CircularQueue:
        data ← array of size CAPACITY
        head ← 0
        tail ← 0
        size ← 0

    FUNCTION enqueue(queue, value):
        IF queue.size = CAPACITY THEN ERROR "Queue overflow"
        queue.data[queue.tail] ← value
        queue.tail ← (queue.tail + 1) MOD CAPACITY
        queue.size ← queue.size + 1

    FUNCTION dequeue(queue):
        IF queue.size = 0 THEN ERROR "Queue underflow"
        value ← queue.data[queue.head]
        queue.head ← (queue.head + 1) MOD CAPACITY
        queue.size ← queue.size - 1
        RETURN value

    FUNCTION peek(queue):
        IF queue.size = 0 THEN ERROR "Queue is empty"
        RETURN queue.data[queue.head]

    FUNCTION isEmpty(queue):
        RETURN queue.size = 0
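A direct Python translation of the pseudocode above might look like this (a sketch; the class and method names are illustrative):

```python
class CircularQueue:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.head = 0
        self.tail = 0
        self.size = 0

    def enqueue(self, value):
        if self.size == self.capacity:
            raise OverflowError("queue overflow")
        self.data[self.tail] = value
        self.tail = (self.tail + 1) % self.capacity  # wrap around
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue underflow")
        value = self.data[self.head]
        self.head = (self.head + 1) % self.capacity  # wrap around
        self.size -= 1
        return value

    def peek(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        return self.data[self.head]

    def is_empty(self):
        return self.size == 0

# Usage: the tail wraps past the end of the backing array
q = CircularQueue(3)
for v in ("a", "b", "c"):
    q.enqueue(v)
assert q.dequeue() == "a"
q.enqueue("d")  # reuses the slot "a" vacated; tail wrapped to index 0
assert [q.dequeue() for _ in range(3)] == ["b", "c", "d"]
```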
Linked-List Queue (Pseudocode)
    STRUCTURE Node:
        value
        next ← null

    STRUCTURE LinkedQueue:
        head ← null
        tail ← null
        size ← 0

    FUNCTION enqueue(queue, value):
        node ← new Node(value)
        IF queue.tail ≠ null THEN
            queue.tail.next ← node
        queue.tail ← node
        IF queue.head = null THEN
            queue.head ← node
        queue.size ← queue.size + 1

    FUNCTION dequeue(queue):
        IF queue.head = null THEN ERROR "Queue underflow"
        value ← queue.head.value
        queue.head ← queue.head.next
        IF queue.head = null THEN
            queue.tail ← null
        queue.size ← queue.size - 1
        RETURN value
BFS Using a Queue (Pseudocode)
    FUNCTION bfs(graph, startNode):
        visited ← empty set
        queue ← new LinkedQueue
        enqueue(queue, startNode)
        ADD startNode TO visited
        WHILE NOT isEmpty(queue):
            current ← dequeue(queue)
            PROCESS current
            FOR EACH neighbor IN graph.neighbors(current):
                IF neighbor NOT IN visited THEN
                    ADD neighbor TO visited
                    enqueue(queue, neighbor)
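The same traversal fits in a few lines of runnable Python using `collections.deque` and an adjacency-list dictionary (the graph and function names here are illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Return nodes in breadth-first (level) order from start.

    graph: dict mapping each node to a list of its neighbors.
    """
    visited = {start}   # mark nodes when enqueued, not when dequeued
    order = []
    queue = deque([start])
    while queue:
        current = queue.popleft()
        order.append(current)
        for neighbor in graph[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(g, "A"))  # ['A', 'B', 'C', 'D']
```

Marking nodes as visited at enqueue time (rather than at dequeue time) prevents the same node from being added to the queue twice.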
Real-World Applications
- BFS and level-order traversal: breadth-first search uses a queue to explore graph nodes level by level. This is how shortest-path algorithms work on unweighted graphs, and how tree level-order traversal is implemented.
- Task scheduling: operating systems use ready queues to schedule processes. CPU schedulers like round-robin cycle through a queue of runnable processes, giving each a time slice.
- Message queues: systems like Kafka, RabbitMQ, and SQS are built around the queue abstraction. Producers enqueue messages, consumers dequeue them, decoupling services and smoothing out load spikes.
- Print spooling: print jobs are queued in FIFO order. The printer processes them one at a time in the order they were submitted.
- Network packet buffering: routers and network interfaces buffer incoming packets in queues when they arrive faster than they can be processed. This is where terms like "queue depth" and "buffer bloat" come from.
- Sliding window algorithms: deques power efficient sliding window solutions. The sliding window maximum problem, common in streaming data and interview questions, uses a deque to maintain candidates in $O(n)$ total time.
Key Takeaways
- A queue is a FIFO data structure with three core operations: enqueue, dequeue, and peek, all running in $O(1)$ time
- Circular queues (ring buffers) solve the wasted-space problem of naive array queues by wrapping indices with modular arithmetic
- Deques generalize queues to support insertion and removal at both ends, making them usable as both stacks and queues
- BFS, the foundational graph traversal algorithm, depends on a queue to process nodes in level order
- When you see a problem that requires "process in arrival order" or "level-by-level exploration," reach for a queue