Bayesian Inference
Probability updated through evidence. Prior beliefs meet observed data, producing posterior distributions that refine our understanding of uncertain events. The update repeats as each new observation arrives.
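The prior-to-posterior update can be sketched with the Beta-Bernoulli conjugate pair, where the posterior has a closed form. The uniform prior and the particular flip sequence below are illustrative assumptions, not from the text.

```python
# Beta-Bernoulli conjugate update: a minimal sketch of Bayesian inference.
# The prior Beta(alpha, beta) describes belief about a coin's heads
# probability; observed flips update the counts to give the posterior.

def update(alpha, beta, flips):
    """Return posterior (alpha, beta) after observing flips (1=heads, 0=tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1), then 7 heads in 10 flips.
a, b = update(1, 1, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print(posterior_mean(a, b))  # 8/12 ≈ 0.667
```

Each batch of data can feed the resulting posterior back in as the next prior, which is the "continuous recalculation" the definition describes.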
Random Variables
Measurable functions mapping outcomes to real numbers. Each variable carries a distribution describing the likelihood of every possible value.
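The definition can be made concrete with a fair die, representing the distribution as an explicit outcome-to-probability mapping (the die example is an illustrative assumption).

```python
# A random variable sketched as a function from sample-space outcomes to
# real numbers, paired with a distribution over those outcomes.
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                 # sample space of a fair die
pmf = {w: Fraction(1, 6) for w in outcomes}   # likelihood of each outcome
X = lambda w: w                               # the variable: outcome -> real value

assert sum(pmf.values()) == 1                 # probabilities must sum to one
print(pmf[X(3)])  # 1/6
```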
Expected Value
The weighted average of all possible outcomes. A single number summarizing the center of a probability distribution.
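The weighted average reads directly as code for a discrete distribution; the fair-die example is an assumption for illustration.

```python
# Expected value E[X] = sum of x * P(x): a probability-weighted average
# over a discrete distribution given as value -> probability pairs.

def expected_value(dist):
    return sum(x * p for x, p in dist.items())

fair_die = {x: 1 / 6 for x in range(1, 7)}
print(expected_value(fair_die))  # 3.5
```

Note that 3.5 is not itself a possible outcome; the expected value summarizes the center, not a typical draw.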
Conditional Probability
The likelihood of an event given that another event has occurred. The foundation of dependent reasoning in uncertain systems.
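The defining ratio P(A | B) = P(A and B) / P(B) can be checked by counting over a finite sample space; the two-dice event below is an assumed example.

```python
# Conditional probability by enumeration: P(A | B) = P(A and B) / P(B).
# Example (two fair dice): probability the sum is at least 10, given the
# first die shows a 6.
from itertools import product
from fractions import Fraction

space = list(product(range(1, 7), repeat=2))   # all 36 equally likely rolls
B = [w for w in space if w[0] == 6]            # conditioning event
A_and_B = [w for w in B if sum(w) >= 10]       # joint event within B
print(Fraction(len(A_and_B), len(B)))  # 1/2
```

Conditioning shrinks the sample space to B; probabilities are then re-weighted within that smaller world.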
Law of Large Numbers
As sample size grows, the sample mean converges to the expected value. Certainty emerges from accumulated uncertainty.
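A seeded simulation shows the convergence for fair-coin flips (the sample size and seed are arbitrary choices for the sketch).

```python
# Law of large numbers, sketched: the sample mean of fair-coin flips
# approaches the expected value 0.5 as the number of flips grows.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(100_000)]
mean = sum(flips) / len(flips)
print(abs(mean - 0.5))  # small deviation at this sample size
```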
Independence
Events whose outcomes do not influence each other. The joint probability equals the product of individual probabilities.
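The product rule can be verified exactly by enumeration; the two dice events chosen here are illustrative assumptions.

```python
# Independence check by enumeration over two fair dice:
# A = "first die is even", B = "second die is 5".
# Independence requires P(A and B) = P(A) * P(B).
from itertools import product
from fractions import Fraction

space = list(product(range(1, 7), repeat=2))

def prob(event):
    """Exact probability of an event over the uniform sample space."""
    return Fraction(sum(1 for w in space if event(w)), len(space))

A = lambda w: w[0] % 2 == 0
B = lambda w: w[1] == 5
print(prob(lambda w: A(w) and B(w)) == prob(A) * prob(B))  # True
```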
Central Limit Theorem
The sum of many independent random variables tends toward a normal distribution, regardless of the original distribution's shape, provided it has finite variance. This fundamental theorem explains why the bell curve appears so frequently in nature and measurement. As sample sizes increase, the sampling distribution of the mean becomes increasingly Gaussian, enabling powerful statistical inference techniques.
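The effect can be sketched by averaging uniform draws, which are far from bell-shaped individually; the sample sizes below are arbitrary choices for the demonstration.

```python
# Central limit theorem, sketched: means of n uniform(0,1) draws
# concentrate around 0.5 with spread close to the theoretical
# sigma / sqrt(n), where sigma = sqrt(1/12) for the uniform distribution.
import math
import random

random.seed(1)
n, trials = 30, 20_000
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

mu = sum(means) / trials
sd = math.sqrt(sum((m - mu) ** 2 for m in means) / trials)
print(round(mu, 3), round(sd, 4))  # near 0.5 and sqrt(1/12)/sqrt(30) ≈ 0.0527
```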
Markov Chains
A stochastic process where the future depends only on the present state, not the history of past states. Memoryless transitions between discrete states form chains that model weather patterns, stock prices, language generation, and page ranking algorithms. The transition matrix encodes all possible state changes and their associated probabilities.
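The weather example mentioned above can be sketched as a two-state chain; the specific transition probabilities are assumptions for illustration. Iterating the distribution forward through the transition matrix converges to the chain's stationary distribution.

```python
# A two-state weather Markov chain. Each row of the transition matrix
# gives P(next state | current state); the future depends only on the
# present state.

P = {  # assumed example transition probabilities
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(dist):
    """Advance a probability distribution over states by one transition."""
    nxt = {s: 0.0 for s in P}
    for s, p in dist.items():
        for t, q in P[s].items():
            nxt[t] += p * q
    return nxt

dist = {"sunny": 1.0, "rainy": 0.0}
for _ in range(100):
    dist = step(dist)
print(dist)  # stationary distribution: sunny ≈ 5/6, rainy ≈ 1/6
```

Solving the balance equation 0.1·π(sunny) = 0.5·π(rainy) confirms the limit: sunny days dominate 5 to 1 under these assumed probabilities.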
Monte Carlo Methods
Computational algorithms that use repeated random sampling to obtain numerical results. Simulating thousands of random trials approximates complex probability distributions that lack closed-form solutions. The method transforms intractable integrals into sampling problems, leveraging the law of large numbers for convergence.
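A classic sketch of the idea estimates π: sample points uniformly in the unit square and count the fraction landing inside the quarter circle, whose area ratio is π/4.

```python
# Monte Carlo estimate of pi: the fraction of uniform random points in
# the unit square that fall inside the quarter circle approximates pi/4.
import random

random.seed(42)
n = 200_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
print(4 * inside / n)  # ≈ 3.14
```

The error shrinks like 1/sqrt(n), exactly as the law of large numbers predicts: more samples buy more precision, with no closed-form integration required.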
Entropy & Information
Shannon entropy quantifies the average information content of a probability distribution. Higher entropy means greater uncertainty and more information gained per observation. The measure connects probability theory to information theory, thermodynamics, and data compression in a single elegant framework.
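The definition H(p) = -Σ pᵢ log₂ pᵢ is direct to compute; the coin distributions below are assumed examples showing that uncertainty, not just the number of outcomes, determines information content.

```python
# Shannon entropy in bits: H(p) = -sum of p_i * log2(p_i).
# A fair coin carries exactly one bit per observation; a biased coin less.
import math

def entropy(probs):
    """Entropy in bits of a discrete distribution (zero terms skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ≈ 0.469
```

The biased coin is more predictable, so each flip reveals less information, which is why skewed data compresses better than uniform data.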