where threads converge

[T0:RUNNING] The Fork

Every concurrent system begins with a fork -- a single thread of execution that splits into many. The operating system allocates a stack, assigns a thread ID, and hands the new thread to the scheduler. From this moment, the thread exists in a superposition of states: ready to run, yet not running, its next instant in the scheduler's hands.
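
The moment the operating system allocates a stack and assigns an ID can be made explicit. A minimal sketch using std::thread::Builder; the name and the 1 MiB stack size are illustrative choices, not defaults:

use std::thread;

fn main() {
    // request a named thread with an explicit stack; both values are illustrative
    let handle = thread::Builder::new()
        .name("worker".into())
        .stack_size(1024 * 1024)
        .spawn(|| {
            // the OS has assigned this thread its own identity
            println!("thread id: {:?}", thread::current().id());
        })
        .expect("spawn failed");
    handle.join().unwrap();
}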

[T0:RUNNING] Shared Memory

Threads share an address space. This is both their power and their peril. A value written by one thread is visible to all others -- eventually. The word "eventually" carries the weight of entire systems. Between the write and the visibility lies the memory model: cache coherence protocols, store buffers, fence instructions. Concurrency forces you to question what "now" means.
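
What "eventually" costs can be sketched with atomics: a Release store publishes everything written before it, and an Acquire load that observes the store makes all of it visible. The names DATA and READY below are illustrative:

use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};
use std::thread;

static DATA: AtomicI32 = AtomicI32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let writer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        READY.store(true, Ordering::Release); // publish: everything above becomes visible...
    });
    let reader = thread::spawn(|| {
        while !READY.load(Ordering::Acquire) { // ...to whoever observes this flag as true
            std::hint::spin_loop(); // busy-wait for the flag
        }
        assert_eq!(DATA.load(Ordering::Relaxed), 42); // guaranteed by the Release/Acquire pair
    });
    writer.join().unwrap();
    reader.join().unwrap();
}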

[T0:RUNNING] The Scheduler's Dilemma

The scheduler is omniscient but not omnipotent. It sees every thread, knows every priority, measures every quantum. Yet it cannot prevent a high-priority thread from spinning on a lock held by a low-priority one, while a medium-priority thread -- needing neither the lock nor permission -- runs on and starves the only thread that could set the high one free. Priority inversion -- the scheduler's nightmare -- is a reminder that concurrency resists central planning.
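
std::thread exposes no priorities, so the sketch below only emulates that schedule with sleeps; "low", "high", and "medium" are labels for illustration, not real scheduler classes:

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let lock = Arc::new(Mutex::new(()));

    let l = Arc::clone(&lock);
    let low = thread::spawn(move || {
        let _guard = l.lock().unwrap();           // "low" takes the lock first
        thread::sleep(Duration::from_millis(50)); // stand-in for being preempted mid-critical-section
    });

    thread::sleep(Duration::from_millis(10)); // let "low" win the lock

    let h = Arc::clone(&lock);
    let high = thread::spawn(move || {
        let _guard = h.lock().unwrap(); // "high" blocks here, on "low"'s lock
    });

    let medium = thread::spawn(|| {
        thread::sleep(Duration::from_millis(30)); // "medium" runs regardless, starving "low"
    });

    for t in [low, high, medium] {
        t.join().unwrap();
    }
}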

[T0:WAITING] The Race

A race condition is not a bug you find. It is a bug that finds you -- intermittently, unpredictably, under load, in production, at 3am. Two threads reach for the same memory location. One reads a stale value. The other overwrites before the first can react. The result is corruption so subtle it may take weeks to surface.
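
Safe Rust rules out the raw data race, but a race condition survives wherever a read and a write are separate steps. A sketch of the lost update, assuming two threads that increment by loading and then storing:

use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicI32::new(0));
    let handles: Vec<_> = (0..2).map(|_| {
        let c = Arc::clone(&counter);
        thread::spawn(move || {
            for _ in 0..100_000 {
                let v = c.load(Ordering::Relaxed); // read...
                c.store(v + 1, Ordering::Relaxed); // ...then write: the other thread can slip in between
            }
        })
    }).collect();
    for h in handles { h.join().unwrap(); }
    // usually prints less than 200000: updates were lost in the gap
    println!("{}", counter.load(Ordering::Relaxed));
    // the fix: c.fetch_add(1, Ordering::Relaxed) makes read-modify-write a single step
}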

[BLOCKED]

Thread A holds Lock 1, waiting for Lock 2

[BLOCKED]

Thread B holds Lock 2, waiting for Lock 1
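
Neither can proceed; each waits forever on the other. The classic escape is a consistent global order -- a minimal sketch, assuming both threads agree to take lock1 before lock2:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let lock1 = Arc::new(Mutex::new(()));
    let lock2 = Arc::new(Mutex::new(()));

    let (a1, a2) = (Arc::clone(&lock1), Arc::clone(&lock2));
    let a = thread::spawn(move || {
        let _g1 = a1.lock().unwrap(); // both threads take lock1 first...
        let _g2 = a2.lock().unwrap(); // ...then lock2: no cycle can form
    });

    let (b1, b2) = (Arc::clone(&lock1), Arc::clone(&lock2));
    let b = thread::spawn(move || {
        let _g1 = b1.lock().unwrap(); // same order, so B simply waits its turn
        let _g2 = b2.lock().unwrap();
    });

    a.join().unwrap();
    b.join().unwrap();
}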

concurrency is not parallelism

[T0:RUNNING] Coordination

Synchronization is the price of shared state. Every mutex acquisition is a contract: "I will hold this lock for the minimum duration necessary, and I will release it even if I fail." The discipline of concurrent programming is the discipline of honoring these contracts under duress -- when the system is under load, when exceptions fire, when the unexpected becomes the norm.
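
In Rust the contract is held by the guard itself: when the guard goes out of scope -- even by panic -- the lock is released and the mutex is marked poisoned. A sketch of the failure half of the contract; recovering with into_inner is one policy among several:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0));

    let d = Arc::clone(&data);
    let t = thread::spawn(move || {
        let _guard = d.lock().unwrap();
        panic!("failed while holding the lock"); // the guard is still dropped: lock released
    });
    let _ = t.join(); // the panic surfaces here as an Err

    // the mutex is poisoned, not stuck; a survivor may choose to proceed anyway
    let val = data.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    println!("recovered: {}", *val);
}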

[T0:RUNNING] The Barrier

A barrier is a meeting point. All threads must arrive before any may proceed. It is the most egalitarian primitive in concurrency: no thread is more important than another at the barrier. The fastest waits for the slowest. The barrier teaches patience to processes that know only speed.
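
A runnable sketch of the meeting point, assuming four workers with deliberately uneven work:

use std::sync::{Arc, Barrier};
use std::thread;
use std::time::Duration;

fn main() {
    let barrier = Arc::new(Barrier::new(4));
    let handles: Vec<_> = (0..4u64).map(|i| {
        let b = Arc::clone(&barrier);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(i * 10)); // uneven work
            b.wait(); // the fastest waits here for the slowest
            println!("thread {i} proceeds");
        })
    }).collect();
    for h in handles { h.join().unwrap(); }
}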

[T0:COMPLETE] All threads joined.

The work is done. Every lock has been released, every barrier passed, every channel closed. The threads converge one final time -- not at a synchronization point, but at termination. The operating system reclaims their stacks. The scheduler removes them from its queues. What remains is the result: a value computed by many, owned by none.

// spawn.rs
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        // new thread begins: its own stack, its own ThreadId
    });
    handle.join().unwrap(); // wait for it; dropping the handle detaches instead
}
// shared memory
use std::sync::{Arc, Mutex};

let data = Arc::new(Mutex::new(0));
let d = Arc::clone(&data);
thread::spawn(move || {
    let mut val = d.lock().unwrap(); // block until the lock is free
    *val += 1; // visible to every holder of `data`
});
// race condition
// UNSAFE: no synchronization
static mut COUNTER: i32 = 0;
// Thread A: unsafe { COUNTER += 1 }
// Thread B: unsafe { COUNTER += 1 }
// Result: undefined behavior -- a data race
// mutex acquire
let guard = mutex.lock().unwrap();
// critical section
// only one thread here
drop(guard); // release early; otherwise released at end of scope
// barrier sync
let barrier = Arc::new(Barrier::new(4));
// all 4 threads must arrive (each via its own Arc clone)
barrier.wait();
// proceed together
// join
for handle in handles {
    handle.join().unwrap();
}
// all threads complete