Threads
Parallel Ridgelines
Threads are the fundamental geological force of concurrency -- independent flows of execution that carve parallel ridges across the computational landscape. Like watercourses running side by side through soft terrain, each thread follows its own path while sharing the same bedrock of memory.
A thread is a sequence of instructions that can be scheduled independently by the operating system. Multiple threads within a single process share the same address space, file descriptors, and heap memory, but each maintains its own stack and register state. This shared-but-separate nature is what gives threads their power -- and their danger.
let ridge = std::thread::spawn(|| { /* parallel ridge */ });
ridge.join().unwrap(); // wait for the ridge to rejoin the main flow
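To make the shared-but-separate split concrete, here is a minimal sketch using the standard library's Arc (the names and thread count are illustrative, not prescribed by the text): every spawned thread reads the same heap allocation, while each keeps its id and running sum on its own stack.

use std::sync::Arc;
use std::thread;

fn main() {
    // One heap allocation, shared bedrock for every thread in the process.
    let bedrock = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..3)
        .map(|id| {
            let shared = Arc::clone(&bedrock);
            thread::spawn(move || {
                // `id` and `local_sum` live on this thread's own stack.
                let local_sum: i32 = shared.iter().sum();
                println!("thread {id} sees shared sum {local_sum}");
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}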
The topography of thread execution reveals itself in timing diagrams: parallel ridges of activity separated by valleys of waiting. When threads are well designed, their ridges run cleanly parallel, crossing only at deliberate synchronization points. When they are poorly designed, the ridges converge into chaotic terrain -- race conditions, data corruption, undefined behavior.
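One way to keep the ridges from colliding is to let them meet only at a lock. The sketch below is an assumed illustration using std::sync::Mutex, not the only possible design: several threads increment one shared counter, and because every update happens while the lock is held, the final value is deterministic. An unsynchronized version of the same mutation would be rejected by the Rust compiler rather than left to race at runtime.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A single counter guarded by a mutex; the lock is the one safe crossing point.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // Each increment happens under the lock, so updates never interleave.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // With the lock in place the total is deterministic: 4 * 1_000.
    println!("final count: {}", *counter.lock().unwrap());
}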
Thread creation is a geological event: the operating system allocates a new stack (typically 2-8 MB), assigns scheduling priority, and begins execution at the specified entry point. The cost of this creation is non-trivial -- roughly 10-50 microseconds on modern systems -- which is why thread pools exist as pre-carved channels ready to accept work.
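A pre-carved channel can be sketched with the standard library alone. The worker loop below is a simplified, assumed design (the worker count, task type, and messages are illustrative) using std::sync::mpsc and a mutex-guarded receiver: a fixed set of long-lived threads pulls boxed tasks off a channel, so submitting work is a cheap send rather than a fresh spawn.

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // Tasks are boxed closures sent down a channel to pre-spawned workers.
    let (sender, receiver) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
    let receiver = Arc::new(Mutex::new(receiver));

    // Spawn the workers once; the per-thread creation cost is paid up front.
    let workers: Vec<_> = (0..2)
        .map(|id| {
            let receiver = Arc::clone(&receiver);
            thread::spawn(move || loop {
                // Hold the lock only long enough to pull one task off the channel.
                let task = receiver.lock().unwrap().recv();
                match task {
                    Ok(task) => {
                        println!("worker {id} picked up a task");
                        task();
                    }
                    Err(_) => break, // channel closed: no more work
                }
            })
        })
        .collect();

    // Submitting work is now just a channel send, not a thread spawn.
    for n in 0..5 {
        sender.send(Box::new(move || println!("task {n} done"))).unwrap();
    }
    drop(sender); // closing the channel lets the workers drain and exit

    for worker in workers {
        worker.join().unwrap();
    }
}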