conc.quest
CONCURRENCY EXPLORED
THIS IS A MAP OF PARALLEL WORLDS. EVERY PROCESS RUNS BESIDE ANOTHER. EVERY THREAD SHARES THE SAME CLOCK BUT WALKS A DIFFERENT PATH. CONCURRENCY IS NOT A FEATURE -- IT IS THE ARCHITECTURE OF REALITY ITSELF.
WE BUILT THIS TO SHOW YOU HOW CONCURRENT SYSTEMS THINK. NOT HOW THEY'RE EXPLAINED IN TEXTBOOKS, BUT HOW THEY FEEL WHEN YOU'RE INSIDE THEM: THE RACE CONDITIONS, THE DEADLOCKS, THE BEAUTIFUL MOMENT WHEN PARALLEL PROCESSES SYNCHRONIZE AND THE SYSTEM SINGS.
FLIP EACH CARD. READ THE MACHINE. UNDERSTAND THE QUEST.
PROC.00 :: MANIFESTO
PROCESSES
ISOLATED EXECUTION UNITS
THE FUNDAMENTAL UNIT
A process is a program in execution -- an instance of code given its own memory space, its own stack, its own view of the world. Processes are the heavy artillery of concurrency: isolated, protected, and expensive to create. Each process believes it owns the entire machine. The operating system maintains this illusion through virtual memory, scheduling, and context switching.
When processes need to communicate, they reach across the isolation boundary through pipes, sockets, shared memory segments, or message queues. This communication overhead is the price of safety. Process isolation means one crash doesn't bring down the system -- the walls between processes are concrete, not cardboard.
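As a concrete sketch of reaching across the isolation boundary (in Go, with a hypothetical `runChild` helper, assuming a POSIX `tr` utility on the PATH), a parent process can talk to a child only through pipes -- bytes cross, memory never does:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runChild spawns an isolated OS process and communicates with it
// through pipes -- the simplest form of inter-process communication.
// The child never sees the parent's memory, only the bytes sent to it.
func runChild(input string) (string, error) {
	cmd := exec.Command("tr", "a-z", "A-Z") // assumes a POSIX `tr` on PATH
	cmd.Stdin = strings.NewReader(input)    // write across the isolation boundary
	out, err := cmd.Output()                // read the child's reply from its stdout
	return string(out), err
}

func main() {
	reply, err := runChild("hello from the parent\n")
	if err != nil {
		panic(err)
	}
	fmt.Print(reply)
}
```

If `tr` crashed here, only the child would die; the parent would get an error, not a core dump -- the concrete wall in action.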
QUEUE.DEPTH=1 :: ISOLATED
THREADS
SHARED MEMORY WORKERS
LIGHTWEIGHT AND DANGEROUS
Threads are the lightweight cousins of processes. They share memory, share file descriptors, share everything -- and that sharing is both their power and their peril. A thread can read what another thread wrote without ceremony. No pipes, no serialization. Just raw pointer access to shared state.
This intimacy creates speed but demands discipline. Without synchronization, threads corrupt each other's data in ways that are nondeterministic and nearly impossible to debug. The thread model trades isolation for performance, safety for speed. It is the concurrency equivalent of a trust fall -- beautiful when it works, catastrophic when it doesn't.
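A minimal sketch of that discipline (in Go, using goroutines as the lightweight threads and an illustrative `increment` function): many workers bump one shared counter, and only an atomic operation keeps the writes from silently losing each other:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// increment launches n goroutines that all write to the same counter.
// With a plain `counter++` the unsynchronized writes would race and the
// final value would be nondeterministic; atomic.AddInt64 makes each
// read-modify-write indivisible.
func increment(n int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1) // safe write to shared state
		}()
	}
	wg.Wait() // join all workers before reading the result
	return counter
}

func main() {
	fmt.Println(increment(1000)) // 1000 every run; the racy version would vary
}
```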
MUTEX.STATE:UNLOCKED :: SHARED
QUEUES
ORDERED TASK BUFFERS
THE BUFFER BETWEEN CHAOS AND ORDER
A queue is a promise: what goes in first comes out first. In concurrent systems, queues are the shock absorbers between producers and consumers. When one process generates work faster than another can consume it, the queue absorbs the difference. It is a temporal buffer -- storing not just data, but time itself.
Bounded queues create backpressure: when the buffer fills, producers must wait. This waiting is not a bug -- it is flow control. Unbounded queues are a lie: memory is always finite. The art of concurrent system design is choosing the right queue depth -- too shallow and you starve consumers, too deep and you hide latency behind memory consumption.
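One way to sketch backpressure (in Go, where a buffered channel behaves as a bounded FIFO queue; `produceConsume` and its parameters are illustrative): a fast producer feeding a slow consumer blocks the moment the buffer fills, and is thereby paced:

```go
package main

import (
	"fmt"
	"time"
)

// produceConsume pushes `items` work units through a bounded queue of
// the given depth. When the buffer is full the send blocks -- that is
// backpressure, slowing the producer to the consumer's pace.
func produceConsume(items, depth int) []int {
	queue := make(chan int, depth) // bounded buffer: at most `depth` pending items
	go func() {
		for i := 0; i < items; i++ {
			queue <- i // blocks whenever the queue already holds `depth` items
		}
		close(queue) // signal the consumer that production is done
	}()
	var drained []int
	for v := range queue {
		time.Sleep(time.Millisecond) // a deliberately slow consumer
		drained = append(drained, v)
	}
	return drained
}

func main() {
	fmt.Println(produceConsume(5, 2)) // FIFO order survives the buffering
}
```

Raising `depth` lets the producer run further ahead; it does not make the consumer faster -- it only hides the latency in memory, exactly as the prose warns.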
FIFO :: BOUNDED :: BLOCKING
LOCKS
SYNCHRONIZATION PRIMITIVES
THE GATEKEEPER
A lock is a contract of mutual exclusion. Only one thread may hold it at a time. All others must wait. This is the most fundamental synchronization primitive -- a binary gate that transforms concurrent chaos into sequential access. The mutex, the semaphore, the read-write lock: all variations on the same theme of controlled access.
Locks solve the sharing problem but introduce new dangers. Deadlock: two threads each holding a lock the other needs, frozen forever in mutual waiting. Priority inversion: a high-priority thread blocked on a lock held by a low-priority thread. Lock-free algorithms exist but trade simplicity for correctness proofs that would fill blackboards.
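The acquire / critical-section / release contract, sketched in Go with an illustrative `Account` type guarded by a mutex:

```go
package main

import (
	"fmt"
	"sync"
)

// Account guards its balance with a mutex. Acquire, enter the critical
// section, release: only one goroutine mutates the balance at a time.
type Account struct {
	mu      sync.Mutex
	balance int
}

func (a *Account) Deposit(n int) {
	a.mu.Lock()         // acquire -- all other goroutines now wait here
	defer a.mu.Unlock() // release, even if the critical section panics
	a.balance += n      // critical section: sequential access guaranteed
}

func (a *Account) Balance() int {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.balance
}

func main() {
	var acct Account
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); acct.Deposit(1) }()
	}
	wg.Wait()
	fmt.Println(acct.Balance()) // always 100; unguarded, it could be anything less
}
```

Note the single lock: deadlock needs at least two, each held while waiting for the other -- one reason to keep lock hierarchies shallow.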
ACQUIRE :: CRITICAL_SECTION :: RELEASE
CHANNELS
MESSAGE PASSING CONDUITS
DON'T SHARE MEMORY, SHARE MESSAGES
Channels invert the concurrency paradigm. Instead of sharing memory and protecting it with locks, processes send messages through typed conduits. A channel is a pipe with semantics -- it carries structured data from a sender to a receiver, and the act of sending synchronizes the two parties.
Unbuffered channels force perfect synchronization: the sender blocks until the receiver is ready. Buffered channels allow asynchrony up to the buffer limit. Select statements multiplex across multiple channels, waiting for whichever is ready first. This is CSP -- Communicating Sequential Processes -- the mathematical model that proved message passing is as powerful as shared memory, and often safer.
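A small sketch of both ideas in Go, whose channels and `select` come straight from the CSP lineage (the `firstReady` helper is illustrative): an unbuffered send rendezvouses with its receiver, and `select` waits on whichever conduit is ready first:

```go
package main

import "fmt"

// firstReady multiplexes two channels with select, returning whichever
// message arrives first -- waiting on many conduits at once.
func firstReady(a, b <-chan string) string {
	select {
	case msg := <-a:
		return msg
	case msg := <-b:
		return msg
	}
}

func main() {
	fast := make(chan string) // unbuffered: send and receive must rendezvous
	slow := make(chan string) // no sender ever appears on this one
	go func() { fast <- "fast wins" }() // this send blocks until someone receives
	fmt.Println(firstReady(fast, slow))
}
```

Swapping `make(chan string)` for `make(chan string, 8)` would let the sender run ahead by eight messages -- asynchrony up to the buffer limit, exactly as described above.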
SEND :: RECV :: SELECT
STATE MACHINES
FINITE TRANSITION SYSTEMS
THE SHAPE OF BEHAVIOR
A state machine is a graph of possibilities. Each node is a state; each edge is a transition triggered by an event. At any moment, the machine occupies exactly one state. When an event arrives, the machine follows the matching edge to a new state. This is the simplest possible model of behavior -- and it is sufficient to describe the control flow of any finite concurrent system.
In concurrent systems, state machines multiply. Each process has its own state machine. The global state is the product of all individual states -- a combinatorial explosion that makes concurrent systems both powerful and hard to reason about. Model checking tools explore this state space exhaustively, proving that bad states are unreachable. When they succeed, you have mathematical certainty. When they don't, you have a bug report.
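One state machine from that product space, sketched in Go (the states, events, and `step` function are illustrative): the transition table is literally a graph, and stepping it is a single map lookup:

```go
package main

import "fmt"

// A tiny finite state machine: states are nodes, events are edges.
type State string
type Event string

// transitions maps (state, event) pairs to the next state -- the edges
// of the graph. Pairs absent from the map have no outgoing edge.
var transitions = map[State]map[Event]State{
	"idle":    {"start": "running"},
	"running": {"pause": "paused", "stop": "idle"},
	"paused":  {"start": "running", "stop": "idle"},
}

// step follows the edge for ev, or stays put if no edge matches.
func step(s State, ev Event) State {
	if next, ok := transitions[s][ev]; ok {
		return next
	}
	return s // unmatched event: remain in the current state
}

func main() {
	s := State("idle")
	for _, ev := range []Event{"start", "pause", "start", "stop"} {
		s = step(s, ev)
		fmt.Println(s)
	}
}
```

Run two of these machines side by side and the global state is a pair; add a third and it is a triple -- the combinatorial explosion that model checkers exist to tame.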
TRANSITION :: EVENT :: GUARD
EXIT
PROCESS TERMINATED
$ conc.quest --version
v1.0.0 :: CONCURRENCY EXPLORED
$ cat /credits
DESIGNED AS A CARD-FLIP NARRATIVE
MONOCHROME BY CONSTRAINT
ISOMETRIC BY CONVICTION
$ echo $PHILOSOPHY
CONCURRENCY IS NOT A FEATURE.
IT IS THE ARCHITECTURE OF REALITY.
$ exit 0
_
SESSION.END :: CODE=0