Yesang

예상 — Prediction

Improving how the world makes predictions.

Research

Advancing forecasting methodology through rigorous scientific inquiry and meta-analysis.

Education

Teaching calibrated reasoning to individuals, organizations, and public institutions.

Standards

Developing open standards for prediction accuracy measurement and reporting.

Featured Research

Aggregation Methods for Crowd Forecasts: Weighted vs. Unweighted Consensus

Kim, Okonkwo · January 2026

Comparing the performance of extremized aggregation, track-record weighting, and simple median methods across 12 prediction tournaments. Weighted methods outperform unweighted aggregation by 8–15% on Brier scores.

The Role of Base Rate Neglect in Geopolitical Forecasting

Jensen, Park · November 2025

An experimental study of 2,400 forecasters examining how the presentation of base rate information affects prediction accuracy for rare geopolitical events.

Open Standards for Prediction Market Reporting: A Framework Proposal

Nakamura, Chen, Okonkwo · September 2025

Proposing a standardized reporting framework for prediction markets that enables cross-platform accuracy comparisons and longitudinal tracking of forecasting quality.

Research Methodology

Our research follows rigorous standards for forecasting science. Explore our core methodological approaches below.

Calibration Measurement

We measure calibration using Brier scores and reliability diagrams. A perfectly calibrated forecaster assigns probability X% to events that occur X% of the time. Our calibration assessment protocol involves a minimum of 200 resolved predictions per forecaster, stratified across at least 5 distinct domains.
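As a concrete illustration, here is a minimal sketch of both measurements in Python. The function names and the ten-bin scheme are illustrative choices, not the exact protocol described above; forecasts are assumed to arrive as parallel arrays of probabilities and 0/1 outcomes.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    This single-probability form ranges from 0 (perfect) to 1; the original
    multi-category form ranges from 0 to 2."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

def reliability_table(probs, outcomes, n_bins=10):
    """Bin forecasts by stated probability and compare each bin's mean
    forecast with its observed frequency -- the data behind a reliability
    diagram. For a calibrated forecaster the two columns roughly match."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bin_ids = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    table = []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            table.append((probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return table  # (mean forecast, observed frequency, count) per bin
```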

Forecast Aggregation

We employ multiple aggregation strategies, including extremized means, trimmed means, and track-record-weighted averages. Each method is evaluated against a holdout set of resolved questions. Our research consistently shows that aggregated forecasts outperform the average individual forecaster by 15–30%.
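A minimal sketch of the three strategies, assuming each question's forecasts arrive as an array of probabilities; the extremizing exponent and the inverse-Brier weighting used here are illustrative defaults, not the parameters from our evaluations.

```python
import numpy as np
from scipy.stats import trim_mean

def extremized_mean(probs, a=2.0):
    """Average forecasts in log-odds space, then multiply by a > 1 to push
    the consensus away from 0.5 (one common extremizing transform)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    mean_log_odds = np.mean(np.log(p / (1 - p)))
    return float(1 / (1 + np.exp(-a * mean_log_odds)))

def trimmed_consensus(probs, proportion_to_cut=0.1):
    """Discard the most extreme forecasts in each tail, then average."""
    return float(trim_mean(np.asarray(probs, dtype=float), proportion_to_cut))

def track_record_weighted(probs, past_brier_scores):
    """Weight each forecaster inversely to their historical Brier score,
    so better-calibrated forecasters pull the consensus harder."""
    weights = 1.0 / (np.asarray(past_brier_scores, dtype=float) + 1e-6)
    return float(np.average(np.asarray(probs, dtype=float), weights=weights))
```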

Prediction Tournament Design

Our prediction tournaments follow a pre-registered protocol with clearly defined resolution criteria, minimum participation thresholds, and standardized scoring. Questions span a minimum 90-day resolution window and cover at least 4 domain categories to ensure generalizability of results.

Debiasing Interventions

We study the effectiveness of structured debiasing techniques including consider-the-opposite prompts, reference class forecasting, and pre-mortem analysis. Our training programs incorporate these techniques based on evidence from controlled experiments with over 5,000 participants.
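Of these techniques, reference class forecasting lends itself to a mechanical sketch: anchor on the base rate of a comparable reference class, then adjust only partway toward the case-specific "inside view." The log-odds blend and the 0.7 anchor weight below are hypothetical illustrations, not a formula from our experiments.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def reference_class_forecast(base_rate, inside_view, weight_on_base=0.7):
    """Blend the reference-class base rate with the case-specific estimate
    in log-odds space; weight_on_base controls how strongly the forecast
    stays anchored to the base rate."""
    z = weight_on_base * logit(base_rate) + (1 - weight_on_base) * logit(inside_view)
    return 1 / (1 + math.exp(-z))

# Hypothetical example: a rare event with a 5% base rate and a 40% inside
# view blends to roughly 10% -- pulled most of the way back to the base rate.
print(round(reference_class_forecast(0.05, 0.40), 2))  # ~0.10
```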

Prediction Accuracy Track Record

Transparent reporting of our organizational forecasting performance across domains and time periods.

Domain          Questions   Brier Score   Accuracy
Geopolitics     342         0.148         87%
Economics       289         0.162         84%
Technology      198         0.131         91%
Science         156         0.142         89%
Public Health   124         0.155         85%

Our Programs

Research Program

Our research program conducts original studies on forecasting methodology, calibration science, and prediction market design. We publish open-access papers and maintain a public data repository of forecast results for independent verification.

Publications (2025): 14
Citations: 1,247

Education Program

We offer workshops, online courses, and institutional training programs that teach calibrated reasoning. Our curriculum covers base rate estimation, probability assessment, cognitive debiasing, and structured analytic techniques.

Participants Trained: 12,400
Partner Organizations: 38

Standards Program

We develop and promote open standards for prediction accuracy measurement, reporting, and cross-platform comparison. Our standards are adopted by prediction markets, intelligence agencies, and academic institutions worldwide.

Adopting Institutions: 24
Standard Versions: 3

Team

Dr. Seo-yeon Park

Executive Director

Ph.D. Decision Science, Stanford University. Former senior analyst at IARPA’s forecasting program.

Erik Jensen

Head of Research

Ph.D. Statistics, ETH Zürich. Published 23 papers on probabilistic reasoning and calibration.

Dr. Aiko Nakamura

Calibration Science Lead

Ph.D. Cognitive Psychology, University of Tokyo. Expert in debiasing and judgment under uncertainty.

Dr. Ji-hoon Kim

Education Director

Ed.D. Learning Design, Columbia University. Designed curricula for 30+ institutional partners.

Chioma Okonkwo

Standards Lead

M.S. Information Science, UC Berkeley. Led development of the Open Prediction Reporting Standard.

Liang Chen

Data Scientist

M.S. Machine Learning, CMU. Builds aggregation models and maintains the public forecast dataset.

Test Your Calibration

How well-calibrated are your predictions? Answer the questions below and rate your confidence. A well-calibrated person should be right about as often as their confidence suggests.

1. The population of South Korea exceeded 52 million in the 2020 census.
2. The Brier score ranges from 0 (perfect) to 2 (worst possible).
3. Philip Tetlock’s “superforecasters” outperformed intelligence analysts with classified data.
4. The first prediction market (Iowa Electronic Markets) was established in 1988.
5. Averaging forecasts from a group almost always outperforms the average individual forecaster.

Upcoming Events

Apr 18

Calibration Workshop: Foundations

Online · 2:00 PM EDT · Free

Introduction to calibration training for new forecasters. Learn base rate estimation and probability assessment techniques.

May 06

Annual Forecasting Conference 2026

Seoul, South Korea · May 6–8 · Registration Open

Three-day conference featuring research presentations, forecasting tournaments, and workshops on prediction methodology.

Jun 12

Standards Committee Meeting

Online · 10:00 AM EDT · Open to Partners

Quarterly review of the Open Prediction Reporting Standard v3.1 draft, including discussion of the public comment period.