Preamble
The 2020–2022 graph posts on this blog—DFS, BFS, Dijkstra, grid islands, word-ladder style searches, N-Queens backtracking—were exercises in state exploration with clear invariants. November pulls them into one narrative: what each technique buys you, when it misleads you, and how that thinking echoes systems work like scheduling and search over configuration spaces.
DFS: depth-first exploration
DFS walks until it hits a wall, then backtracks. It is a natural fit for exhaustive search, connectivity checks, and puzzles with deep decision chains. It does not automatically find shortest paths unless the problem structure guarantees it; knowing why saves interview time and production bugs.
BFS: layers and shortest path in unweighted graphs
BFS expands layer by layer, yielding shortest hop counts when edges have uniform cost. Grid problems and social graphs love BFS when “minimum steps” is the metric.
Dijkstra: non-negative weighted shortest paths
When weights appear, Dijkstra (with a priority queue) is the conservative default for non-negative edges. I still narrate relaxation aloud while coding—if I cannot explain the invariant, I should not trust the heap updates.
Backtracking: constraint propagation with undo
N-Queens and similar puzzles are backtracking with pruning. The pattern is choose, recurse, undo—the same shape as exploratory configuration changes with rollback in operational tooling (metaphorically, not literally identical code).
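A minimal sketch of that choose-recurse-undo shape, using N-Queens with set-based pruning (the `n_queens` helper below is illustrative, not code from the original posts):

```python
def n_queens(n):
    """Count N-Queens solutions via choose / recurse / undo with set-based pruning."""
    cols, diag1, diag2 = set(), set(), set()
    count = 0

    def place(row):
        nonlocal count
        if row == n:
            count += 1  # every row got a queen: one full solution
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # pruned: square is attacked
            # choose
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            # recurse
            place(row + 1)
            # undo
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)

    place(0)
    return count
```

The undo step is the whole point: the shared sets are the "configuration", and every choice is rolled back before the next one is tried.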
Study loop that stuck for me
Implement each pattern twice—I used Python and Java in the original series—timed against randomized inputs. Explaining invariants out loud catches fence-post errors faster than staring at IDE highlights.
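A rough sketch of that timing loop in Python, with illustrative helpers (`random_graph` and `time_it` are names I am assuming here, not utilities from the original series):

```python
import random
import time

def random_graph(n, m, max_w=10, seed=0):
    """Random directed graph in the adj shape used below: node -> [(neighbor, weight), ...]."""
    rng = random.Random(seed)
    adj = {u: [] for u in range(n)}
    for _ in range(m):
        u, v = rng.randrange(n), rng.randrange(n)
        adj[u].append((v, rng.randint(1, max_w)))
    return adj

def time_it(fn, *args, repeats=5):
    """Best-of-repeats wall time; coarse, but enough to compare two implementations."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```

Fixing the seed makes runs reproducible, so a Python and a Java port can be raced on the same inputs.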
Mini implementations side by side (same graph API)
Assume `adj: Dict[int, List[Tuple[int, int]]]` maps node → `[(neighbor, weight), …]`; for BFS/DFS, ignore weights or set them to 1.
BFS (shortest hops, unweighted):
```python
from collections import deque

def bfs(adj, start, target):
    q = deque([start])
    prev = {start: None}
    while q:
        u = q.popleft()
        if u == target:
            break
        for v, _w in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return prev  # reconstruct path from prev
```
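The `prev` map holds parent pointers; turning it into an explicit path is a short walk back to the start (a sketch, with `path_from_prev` as a hypothetical helper name):

```python
def path_from_prev(prev, target):
    """Walk parent pointers from target back to the start, then reverse."""
    if target not in prev:
        return None  # target was never reached
    path, node = [], target
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]
```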
Dijkstra (non-negative weights):
```python
import heapq

def dijkstra(adj, start):
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d != dist.get(u, float("inf")):
            continue  # stale heap entry: u was already settled with a smaller distance
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```
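When the route itself matters, not just the costs, the same loop can carry parent pointers, mirroring BFS's `prev` map (a sketch; `dijkstra_with_path` is my name for this variant, not from the original posts):

```python
import heapq

def dijkstra_with_path(adj, start, target):
    """Dijkstra that records parent pointers so the cheapest path can be rebuilt."""
    dist = {start: 0}
    prev = {start: None}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d != dist.get(u, float("inf")):
            continue  # stale heap entry
        if u == target:
            break  # safe with non-negative weights: dist[u] is final when popped
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return None, float("inf")
    path, node = [], target
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], dist[target]
```

The early break leans on the same invariant as the table below: a node's distance is final the moment it is popped.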
DFS (connectivity / exhaustive search):
```python
def dfs(adj, start, seen=None):
    if seen is None:
        seen = set()
    seen.add(start)
    for v, _w in adj.get(start, []):
        if v not in seen:
            dfs(adj, v, seen)
    return seen
```
When to use which (decision table):
| Goal | Algorithm | Invariant you should be able to say aloud |
|---|---|---|
| Min hops, unweighted | BFS | Nodes dequeued in nondecreasing distance from start |
| Min cost, edges ≥ 0 | Dijkstra | dist[u] is final when u is popped |
| Explore all / detect cycle | DFS | seen partitions visited vs not |
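For the cycle-detection row, a plain `seen` set is not enough in directed graphs; one common sketch is three-color marking, where reaching a node that is still on the recursion stack signals a back edge and hence a cycle (illustrative helper, same `adj` shape as above):

```python
def has_cycle(adj):
    """Directed-cycle check via DFS coloring: 0 = unvisited, 1 = on stack, 2 = done."""
    color = {}

    def visit(u):
        color[u] = 1  # entering u: on the current DFS stack
        for v, _w in adj.get(u, []):
            if color.get(v, 0) == 1:
                return True  # back edge into the stack: cycle
            if color.get(v, 0) == 0 and visit(v):
                return True
        color[u] = 2  # leaving u: fully explored
        return False

    return any(color.get(u, 0) == 0 and visit(u) for u in adj)
```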
Port the same three functions to Java (ArrayDeque, PriorityQueue) to mirror the 2020–2022 posts—identical inputs should yield identical shortest-path distances.
Conclusion
Algorithm fluency and systems fluency both reward careful state management. The companion post *Debugging Concurrent Systems: Books and Practices* ties debugging and concurrency reading to how you hold graph and systems work in your head.