CONCURRENT SYSTEMS / TECHNICAL REFERENCE
A thread is the smallest unit of execution schedulable by an operating system. In concurrent systems, multiple threads share the same address space but maintain independent instruction pointers, stack pointers, and register states. The kernel schedules these threads across available processors using preemptive multitasking -- any thread may be suspended at any point to yield processor time to another.
The fundamental challenge emerges from shared mutable state. When two threads read and write the same memory location without coordination, the outcome depends on the precise interleaving of their operations -- a condition known as a data race. Data races are undefined behavior in most systems languages and are among the most common classes of concurrency bug.
Synchronization primitives provide the coordination mechanism. A mutex (mutual exclusion lock) ensures that only one thread can access a critical section at a time. The thread acquires the lock before entering and releases it upon exit. If another thread attempts to acquire an already-held lock, it blocks -- suspending execution until the lock becomes available.
Beyond mutexes, semaphores generalize the concept to allow N concurrent accessors. Condition variables enable threads to wait for specific predicates to become true. Read-write locks distinguish between shared (read) and exclusive (write) access, permitting multiple concurrent readers but only a single writer.
fn critical_section(lock: &Mutex<Data>) {
    let guard = lock.lock().unwrap(); // exclusive access guaranteed
    guard.process();
    // guard dropped at end of scope, lock released
}
A race condition occurs when the correctness of a program depends on the relative timing of events -- typically the interleaving of thread executions. Unlike data races (which are a specific memory-access violation), race conditions are logical errors: the program may produce different results depending on scheduler decisions that are outside the programmer's control.
Deadlock represents the most severe failure mode of synchronization. It occurs when two or more threads each hold a resource the other needs, and neither will release its held resource until it acquires the needed one. The four Coffman conditions -- mutual exclusion, hold-and-wait, no preemption, and circular wait -- must all hold simultaneously for deadlock to occur. Eliminating any single condition prevents deadlock.
Mutual exclusion is the property that at most one thread may occupy a critical section at any given time. Dijkstra's semaphore (1965) was the first formal synchronization primitive: a non-negative integer with two atomic operations -- P (wait/decrement) and V (signal/increment). When the semaphore value is zero, P blocks the calling thread until another thread signals.
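Dijkstra's P/V semantics can be reconstructed from a mutex and a condition variable. Rust's standard library has no semaphore type, so this is an illustrative sketch rather than a production implementation:

```rust
use std::sync::{Condvar, Mutex};

// A counting semaphore following Dijkstra's P/V semantics:
// a non-negative counter with two atomic operations.
struct Semaphore {
    count: Mutex<u32>,
    cond: Condvar,
}

impl Semaphore {
    fn new(n: u32) -> Self {
        Semaphore { count: Mutex::new(n), cond: Condvar::new() }
    }

    // P: block while the count is zero, then decrement.
    fn acquire(&self) {
        let mut count = self.count.lock().unwrap();
        while *count == 0 {
            count = self.cond.wait(count).unwrap();
        }
        *count -= 1;
    }

    // V: increment and wake one waiting thread.
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}

fn main() {
    let sem = Semaphore::new(2);
    sem.acquire();
    sem.acquire(); // count now 0; a third acquire would block
    sem.release();
    sem.acquire(); // succeeds again after the release
}
```

The while loop around cond.wait guards against spurious wakeups: the predicate is rechecked every time the thread resumes.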
Livelock is a subtler failure: threads remain active but make no progress, continuously changing state in response to each other. Unlike deadlock (where threads are blocked), livelocked threads consume CPU cycles without advancing. Priority inversion occurs when a high-priority thread is indirectly blocked by a low-priority thread holding a needed resource.
Modern systems address these pathologies through lock-free and wait-free algorithms. Compare-and-swap (CAS) operations provide atomic read-modify-write semantics without locks. Lock-free data structures guarantee system-wide progress: at least one thread completes its operation in a finite number of steps, even if other threads are suspended. Wait-free structures offer the stronger per-thread guarantee that every thread completes its operation in a bounded number of steps.
1. Mutual Exclusion -- resource held exclusively
2. Hold and Wait -- hold one resource, request another
3. No Preemption -- a held resource cannot be forcibly released
4. Circular Wait -- cyclic dependency chain
Message-passing architecture for distributed concurrent processes