Open standards for simulation AI. We publish benchmarks, maintain datasets, and coordinate cross-institutional research to advance the field of simulation intelligence.
The definitive benchmark suite for evaluating simulation AI systems. Covers physics accuracy, agent behavior, real-time performance, and scalability across 12 standard scenarios.
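A suite organized around those four axes could be scored as in the sketch below. This is a hypothetical illustration only: the scenario names, the `ScenarioResult` structure, and the per-axis mean aggregation are assumptions, not SimBench's actual API or scoring scheme.

```python
from dataclasses import dataclass
from statistics import mean

# The four evaluation axes named above (assumed identifiers).
AXES = ("physics_accuracy", "agent_behavior", "real_time_performance", "scalability")

@dataclass
class ScenarioResult:
    """One of the 12 standard scenarios, scored on each axis in [0, 1]."""
    name: str
    scores: dict  # axis name -> score

def aggregate(results: list[ScenarioResult]) -> dict:
    """Mean score per axis across scenarios (a simple, assumed scheme)."""
    return {axis: mean(r.scores[axis] for r in results) for axis in AXES}

# Hypothetical scenario names and scores, for illustration only.
results = [
    ScenarioResult("rigid_body_stack", {
        "physics_accuracy": 0.92, "agent_behavior": 0.80,
        "real_time_performance": 0.75, "scalability": 0.60}),
    ScenarioResult("crowd_navigation", {
        "physics_accuracy": 0.70, "agent_behavior": 0.88,
        "real_time_performance": 0.65, "scalability": 0.72}),
]
print(aggregate(results))
```

Reporting per-axis means rather than a single composite number keeps trade-offs visible, e.g. a system that is accurate but slow is not hidden behind one averaged score.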
| Date | Title | Authors | Venue |
|---|---|---|---|
| 2026-02 | SimBench v3: A Comprehensive Evaluation Framework for Simulation AI | SIMULAI Consortium | AAAI 2026 |
| 2026-01 | OpenPhysics: Standardizing Differentiable Simulation Interfaces | Kim, Park, et al. | NeurIPS |
| 2025-11 | Agent Communication Protocols for Multi-Agent Simulation | Chen, Okonkwo | AAMAS |
| 2025-09 | Scalable Digital Twin Frameworks: A Survey and Benchmark | Sharma, Williams | ICML |
| 2025-07 | Toward Unified Evaluation Metrics for Simulation Fidelity | SIMULAI Working Group | JAIR |
| Chair | Members | Next Meeting | Focus Areas |
|---|---|---|---|
| Dr. Sarah Kim | 24 | Mar 20, 2026 | Differentiable simulation, contact dynamics, fluid-structure coupling |
| Prof. James Chen | 18 | Mar 25, 2026 | Multi-agent protocols, communication standards, behavior modeling |
| Dr. Priya Sharma | 31 | Apr 1, 2026 | Benchmark design, metric standardization, cross-domain evaluation |
SIMULAI.ORG is open to researchers, institutions, and industry partners committed to advancing open standards for simulation AI. Membership is free for academic institutions.
Apply for Membership →