CONC ENGINE

Concurrent Event Simulation Engine

Orchestrating parallel execution across distributed event streams.


Process Architecture

The concurrent event engine operates on a fork-join execution model. Each incoming event spawns a lightweight thread that traverses a directed acyclic graph of handlers. When a handler requires parallel decomposition, it forks the current thread into N child threads — each pursuing an independent execution path through the handler graph.
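The fork-join traversal described above can be sketched in a few lines. This is a minimal illustration, not the engine's implementation: `run_slice`, `fork_join`, and the branch lists are hypothetical names, and each "slice" stands in for one independent execution path through the handler graph.

```python
import threading

# Minimal fork-join sketch (hypothetical names): each fork point splits the
# handler graph into independent slices, one lightweight thread per slice.
def run_slice(handlers, event, results, idx):
    state = event
    for handler in handlers:          # run this slice's handlers in order
        state = handler(state)
    results[idx] = state

def fork_join(event, slices):
    """Fork one thread per handler slice, then join them all."""
    results = [None] * len(slices)
    threads = [
        threading.Thread(target=run_slice, args=(s, event, results, i))
        for i, s in enumerate(slices)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                      # join point: all branches complete
    return results

# Two branches over event value 3: one doubles, one increments then squares.
branches = [[lambda e: e * 2], [lambda e: e + 1, lambda e: e * e]]
print(fork_join(3, branches))         # → [6, 16]
```

Because the branches share no state here, the join order does not affect the result, which mirrors the determinism-within-a-branch guarantee described below.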

Thread scheduling is non-preemptive. Once a thread acquires a handler lock, it executes to completion or to the next fork point — whichever comes first. This guarantees deterministic ordering within a single execution branch while permitting genuine concurrency across branches.

[Diagram: event_in → fork() → thread_α, thread_β]

Event Dispatch Table

fork(event_id: u64)
  → thread_α: handler_graph[0..n]
  → thread_β: handler_graph[n..m]
join(thread_α, thread_β) → merged_state: EventResult

Conflict Resolution

When two threads attempt concurrent writes to the same state region, a conflict is detected at the merge point. The engine employs a last-writer-wins strategy with causal ordering — the thread whose fork point has a higher Lamport timestamp takes precedence.
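A last-writer-wins resolution over Lamport-timestamped writes can be sketched as follows. The write tuples and the thread-id tie-breaker are assumptions for illustration; the source specifies only that the write with the higher Lamport timestamp takes precedence.

```python
# Last-writer-wins keyed by Lamport timestamp (illustrative values).
# Each write is (lamport_ts, thread_id, value); the write whose fork point
# carries the higher timestamp wins, with thread id as a deterministic
# tie-breaker (an assumption — the engine's actual tie rule is unstated).
def resolve(writes):
    return max(writes, key=lambda w: (w[0], w[1]))[2]

writes = [(4, "thread_γ", "A"), (7, "thread_δ", "B")]
print(resolve(writes))  # → B  (timestamp 7 > 4)
```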

[Diagram: thread_γ and thread_δ write the same state region; conflict detected at merge]

Thread Timeline

[Timeline: t₀ through t₆, marking fork, join, and conflict points]

Execution Guarantees

The engine provides exactly-once delivery semantics within a single simulation epoch. Events are deduplicated at the ingestion boundary using content-addressed hashing — identical events arriving from multiple sources are collapsed into a single thread root. The Lamport clock ensures causal consistency across all fork-join boundaries.
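Content-addressed deduplication at the ingestion boundary can be sketched like this. The canonical-JSON encoding and the `ingest` helper are assumptions; the point is only that identical payloads hash to the same address and collapse into one thread root.

```python
import hashlib
import json

# Content-addressed dedup sketch: hash each event's payload (canonical
# JSON here, an assumption) and keep one thread root per distinct hash.
def content_hash(event):
    canonical = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def ingest(events):
    roots = {}
    for ev in events:
        roots.setdefault(content_hash(ev), ev)   # first arrival wins
    return list(roots.values())

# Two copies of event 1 (different key order) plus event 2 yield two roots.
dupes = [{"id": 1, "kind": "tick"}, {"kind": "tick", "id": 1}, {"id": 2}]
print(len(ingest(dupes)))  # → 2
```

Sorting keys before hashing is what makes the two orderings of event 1 collapse; without a canonical encoding, byte-identical payloads would be required.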

State Isolation Model

Each thread operates on a copy-on-write snapshot of the global state. Writes are buffered in a thread-local journal until the join point, where the engine performs a three-way merge between the parent state, thread_α journal, and thread_β journal. Conflicts are resolved by timestamp ordering.
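The three-way merge between the parent snapshot and the two journals can be sketched as below. The journal layout (key → (lamport_ts, value)) and all names are hypothetical; conflicts on a shared key fall back to the timestamp ordering described in Conflict Resolution.

```python
# Three-way merge sketch: parent snapshot plus two thread-local journals.
# Each journal maps key → (lamport_ts, value); when both threads wrote the
# same key, the higher Lamport timestamp wins (last-writer-wins).
def three_way_merge(parent, journal_a, journal_b):
    merged = dict(parent)               # copy-on-write: parent stays intact
    for key in set(journal_a) | set(journal_b):
        a, b = journal_a.get(key), journal_b.get(key)
        if a and b:                     # both threads wrote: conflict
            merged[key] = max(a, b)[1]  # resolve by timestamp ordering
        else:
            merged[key] = (a or b)[1]   # only one writer: take it as-is
    return merged

parent = {"x": 0, "y": 0}
journal_a = {"x": (3, 10)}                # thread_α wrote x at t=3
journal_b = {"x": (5, 20), "y": (4, 7)}   # thread_β wrote x at t=5, y at t=4
print(three_way_merge(parent, journal_a, journal_b))  # → {'x': 20, 'y': 7}
```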

snapshot(global_state) → local_copy
execute(handler_graph, local_copy)
commit(journal) → merge_queue
journal_α + journal_β → merge() → committed

The merge operation is atomic. If any conflict cannot be resolved deterministically, the entire epoch is rolled back and re-executed with conflict-avoidance hints injected into the fork scheduler.
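The rollback-and-retry loop can be sketched as a simple control structure. Everything here is hypothetical: the exception type, the hint format (a set of conflicting keys), and the simulated first-attempt failure exist only to show the shape of "roll back the epoch, inject hints, re-execute."

```python
# Epoch retry sketch: an unresolvable merge conflict rolls the epoch back
# and re-executes it with conflict-avoidance hints for the fork scheduler.
class UnresolvableConflict(Exception):
    def __init__(self, keys):
        self.keys = keys

def run_epoch(events, hints):
    # Hypothetical stand-in: the first attempt (no hints) fails on key "x";
    # a hinted re-execution serializes the offending handlers and commits.
    if not hints:
        raise UnresolvableConflict({"x"})
    return {"committed": True, "serialized": sorted(hints)}

def execute_with_rollback(events, max_retries=3):
    hints = set()
    for _ in range(max_retries):
        try:
            return run_epoch(events, hints)
        except UnresolvableConflict as c:
            hints |= c.keys             # inject hints, re-execute the epoch
    raise RuntimeError("epoch failed after retries")

print(execute_with_rollback([]))  # → {'committed': True, 'serialized': ['x']}
```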

Throughput Characteristics

Under nominal load, the engine sustains 2.4M events/sec on a 16-core topology with 99.97% fork-join completion rate. Thread contention peaks at 12-14% under adversarial workloads — the scheduler adaptively reduces fork depth when contention exceeds the configurable threshold.
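One plausible shape for the adaptive fork-depth control is a multiplicative back-off with slow recovery, sketched below. The control law, the depth bounds, and the 0.14 threshold (echoing the 12-14% contention peak above) are all assumptions, not the engine's documented behavior.

```python
# Adaptive fork-depth sketch: halve the permitted fork depth when measured
# contention exceeds the configured threshold, and let it creep back up by
# one level otherwise. All constants are illustrative assumptions.
def adjust_fork_depth(depth, contention, threshold=0.14, max_depth=8):
    if contention > threshold:
        return max(1, depth // 2)       # back off sharply under contention
    return min(max_depth, depth + 1)    # recover headroom gradually

print(adjust_fork_depth(8, 0.20))  # → 4
print(adjust_fork_depth(4, 0.05))  # → 5
```

Asymmetric back-off (fast down, slow up) is a common choice for this kind of controller because contention spikes are abrupt while recovery is safe to probe incrementally.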

concengine — concurrent event simulation at depth. Every thread is a hypothesis. Every join is a proof.