Concurrengine explores the architecture of simultaneous execution. Two threads of thought, two tracks of computation, running in parallel -- synchronized at critical junctures, independent everywhere else.
Every concurrent system begins with a fork -- the moment a single process splits into two. From this point forward, each thread carries its own state, its own instruction pointer, its own view of the world. They share memory but never assumptions.
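The fork can be sketched in Go -- an assumption of this document, which names no language -- with goroutines standing in for forked threads. Each goroutine below carries its own private state and writes only to its own slot; the `fork` name is illustrative, not part of any library:

```go
package main

import (
	"fmt"
	"sync"
)

// fork spawns n goroutines. Each carries its own state (its id and a
// local counter) and shares nothing with its siblings but the slice
// slot it alone writes and the WaitGroup used to signal completion.
func fork(n int) []int {
	results := make([]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			local := 0 // private state: no other goroutine sees this
			for j := 0; j <= id; j++ {
				local += j
			}
			results[id] = local // distinct index per goroutine: no data race
		}(i)
	}
	wg.Wait() // all forked paths have completed
	return results
}

func main() {
	fmt.Println(fork(4)) // [0 1 3 6]
}
```

Each goroutine starts from the same code but diverges immediately: same instructions, different instruction pointer, different world.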
Both processes reach the same point. State is reconciled. The two become one, momentarily.
When two processes need the same resource, one must wait. The mutex lock is the fundamental primitive of coordination -- a binary gate that ensures only one thread passes at a time. Fairness is not guaranteed; only safety.
Processes communicate through channels -- typed conduits that carry data from sender to receiver. The channel is the wire between two independent minds. Blocking or buffered, the semantics define the rhythm of coordination.
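Both rhythms can be heard in a short Go sketch (the `sum` function is illustrative): a buffered channel lets the sender run ahead until the buffer fills, while an unbuffered channel forces sender and receiver to meet:

```go
package main

import "fmt"

// sum receives from a channel until it is closed, then reports the
// total on a reply channel -- sender and receiver coordinate purely
// through typed conduits, sharing no variables.
func sum(in <-chan int, out chan<- int) {
	total := 0
	for v := range in { // blocks until a value arrives or in is closed
		total += v
	}
	out <- total
}

func main() {
	in := make(chan int, 3) // buffered: sends proceed until the buffer fills
	out := make(chan int)   // unbuffered: send and receive rendezvous
	go sum(in, out)
	for _, v := range []int{1, 2, 3} {
		in <- v
	}
	close(in)          // signal: no more values will ever arrive
	fmt.Println(<-out) // 6
}
```

Closing the channel is itself a message: the receiver's loop ends not because a value said "stop" but because the wire went quiet for good.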
Threads converge. Results merge. The concurrent becomes sequential for one critical moment.
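One common shape of that convergence is fan-in: several channels merged into a single ordered result. A hedged Go sketch (the `merge` helper is invented here, not a standard-library function):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// merge fans in the output of several channels into one slice:
// the moment the concurrent becomes sequential.
func merge(chans ...<-chan int) []int {
	var wg sync.WaitGroup
	out := make(chan int)
	for _, ch := range chans {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(ch)
	}
	go func() {
		wg.Wait() // every input drained
		close(out)
	}()
	var all []int
	for v := range out {
		all = append(all, v)
	}
	sort.Ints(all) // impose a sequential order on concurrent arrivals
	return all
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 2)
	a <- 1
	a <- 3
	close(a)
	b <- 2
	b <- 4
	close(b)
	fmt.Println(merge(a, b)) // [1 2 3 4]
}
```

The final sort is the "critical moment" made literal: arrival order is nondeterministic, so the sequential world must choose its own.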
Beneath every concurrent system lies a scheduler -- the arbiter of time slices, the allocator of CPU cycles. It decides who runs, when, and for how long. Preemptive or cooperative, the scheduler shapes the observable behavior of every concurrent program.
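The arbiter can be miniaturized. The toy below is a cooperative round-robin scheduler -- an illustration invented for this document, not how Go's own runtime works: each task declares how many time slices it needs, the scheduler hands out slices in turn, and control returns only when a task yields. A preemptive scheduler would add the timer interrupt this sketch deliberately lacks:

```go
package main

import "fmt"

// task is a unit of work measured in time slices.
type task struct {
	name  string
	steps int // remaining slices of work
}

// schedule grants one slice per task per round, round-robin,
// and records who ran when -- the observable behavior the
// scheduler shapes.
func schedule(tasks []task) []string {
	var trace []string
	for {
		ran := false
		for i := range tasks {
			if tasks[i].steps > 0 {
				trace = append(trace, tasks[i].name)
				tasks[i].steps--
				ran = true
			}
		}
		if !ran {
			return trace // every task has finished
		}
	}
}

func main() {
	trace := schedule([]task{{"A", 2}, {"B", 1}, {"C", 3}})
	fmt.Println(trace) // [A B C A C C]
}
```

Change the loop from round-robin to shortest-remaining-first and the trace changes while every task still completes: same work, different observable history.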
When two processes each hold a resource the other needs, neither can proceed. The system freezes. This is the pathology of concurrency -- the moment when parallelism becomes paralysis. Detection is mechanical -- a cycle in the wait-for graph; prevention requires discipline, such as acquiring locks in a single global order.
All threads complete. Resources released. The engine halts gracefully.