A cool evening in Basel, at the end of the 17th century. In a study, Jakob Bernoulli sits hunched over columns of numbers. Before him lie records of coin tosses – hundreds, perhaps thousands of trials. At first, the results fluctuate wildly: more heads, then more tails. But the longer the series goes on, the more the ratio seems to settle into a steady pattern. It almost seems as if a hidden order is revealing itself within the chaos.
This observation led Bernoulli to one of the most influential insights in probability theory: the law of large numbers. It states that as the number of repetitions increases, the relative frequencies become more stable and approach the expected value.
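In modern notation, which Bernoulli did not yet have at his disposal, the weak form of the law is usually stated as follows (a standard textbook formulation, given here for orientation): for independent, identically distributed observations with expected value μ, the running average converges to μ in probability.

```latex
% Weak law of large numbers for i.i.d. observations with mean mu:
\[
  \bar{X}_n \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i ,
  \qquad
  \lim_{n \to \infty} \Pr\bigl( \lvert \bar{X}_n - \mu \rvert > \varepsilon \bigr) \;=\; 0
  \quad \text{for every } \varepsilon > 0 .
\]
```

For a fair coin, μ is simply the probability of heads, and the running average is the observed relative frequency of heads.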
But this is precisely where the real point – and the misunderstanding – begins. For stability does not mean certainty. Even as average values settle down, individual deviations remain possible, sometimes even extreme. Large numbers smooth out the surface, but they do not eliminate uncertainty.
Where Chance Gave Way to Order
This insight did not come out of nowhere. Jakob Bernoulli was born on December 27, 1654, in Basel and grew up in a family that initially envisioned a theological, rather than a mathematical, career for him. He complied with his father’s wishes and studied theology, earning a Master of Philosophy degree as early as 1671 and passing his final examination a few years later. His true passion, however, lay in mathematics from an early age. Much of what he mastered in this field he taught himself – not as a pastime, but with the seriousness of a researcher who senses that numbers hold a unique form of knowledge.
Between 1676 and 1682, Bernoulli traveled through France, England, Holland, Germany, and Switzerland. These educational journeys were of crucial importance to his scientific development. During his travels, he met some of the most prominent researchers of his time, including the Anglo-Irish natural philosopher Robert Boyle and the English polymath Robert Hooke. In this way, he gained direct insight into the scientific debates of the early modern era and connected with the intellectual networks that were transforming European thought. After returning to Basel, he gave private lectures on experimental physics, particularly on mechanical problems. In 1682, Bernoulli published “Conamen novi systematis cometarum” (“Attempt at a New System of Comets”), an early scientific treatise that already reveals his ambition not merely to describe natural phenomena but to explain them systematically through mathematical reasoning and, as far as possible, to predict them.
When the chair of mathematics at the University of Basel became vacant in 1687, Bernoulli was appointed to the professorship. He held it until his death in 1705. During those years, he worked on infinite series, problems in infinitesimal calculus, and geometric curves. He studied the natural philosophical and epistemological writings of René Descartes in depth and sought dialogue with the polymath Gottfried Wilhelm Leibniz in order to understand and further develop the new calculus. Jakob Bernoulli was, therefore, by no means a man of a single theorem. He belonged to a generation in which a new scientific way of thinking was taking hold: the world was no longer interpreted merely in qualitative or exemplary terms, but increasingly described analytically, structurally, and quantitatively.
The Art of Conjecturing
It is precisely in this context that Bernoulli’s magnum opus takes on its special significance. Research on the origins of probability theory speaks retrospectively of the difficult birth of stochastics. Bernoulli himself understood his investigations into uncertainty that can be grasped mathematically as an art of reasoned conjecture. Even the title of the work, “Ars Conjectandi”, published posthumously in 1713, can be read as “The Art of Conjecture” or “The Art of Reasoning Under Uncertainty.” What is meant here is not mere guesswork, but a method for drawing reasonable conclusions from incomplete information.
This was something new. Before Bernoulli, people had analyzed games of chance and solved individual problems in combinatorics or expected value calculations. But Bernoulli shifted the horizon. Probability was not merely meant to explain who wins when rolling dice, but also to provide guidance in civil, moral, and economic matters. That is the real turning point: The art of betting became a theory of learning from experience. The singular random event became an object of systematic knowledge.
Precisely for this reason, it makes sense to view Bernoulli not merely as an early probability theorist, but as the architect of a new scientific attitude. For him, the uncertain no longer appeared as a mere counterpart to knowledge, but as a field that could be structured under certain conditions. Here, stochastics begins as an intellectual discipline of humility: one does not know everything, but one can still judge reasonably. In this sense, Bernoulli also stands for the conviction that mathematical order is not a footnote of science, but its fundamental form.
Bernoulli’s Golden Rule
At the heart of this new way of thinking lies the law of large numbers. At first glance, its basic idea is simple: if a random process is repeated often enough under consistent conditions, the observed frequencies will stabilize around an expected value. If you flip a fair coin just ten times, you could easily get heads three or eight times. If you flip it ten thousand times, the proportion of heads will very likely lie close to one half. Not because nature needs to balance out a short run, but because the relative frequency settles down over long runs.
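A minimal simulation sketch makes this behavior concrete (Python with NumPy; the sample sizes and the seed are arbitrary choices, not part of Bernoulli’s argument):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so the run is reproducible

# Simulate tosses of a fair coin: 1 = heads, 0 = tails.
for n in (10, 10_000, 1_000_000):
    tosses = rng.integers(0, 2, size=n)
    print(f"n = {n:>9,}: relative frequency of heads = {tosses.mean():.4f}")
```

In a typical run, ten tosses can produce anything from 0.2 to 0.8, while a million tosses seldom stray more than a tenth of a percentage point from 0.5.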
This point is particularly important because the law of large numbers is still often misunderstood today. It is not a law of compensation in the sense of a correction of fate. If heads comes up unusually often in the first few tosses, the coin owes nothing to the future. Bernoulli’s theorem does not say that the world actively compensates for irregularities. It merely states that the significance of individual random fluctuations becomes relatively smaller as the number of observations increases. Convergence is not a moral restoration of balance, but a statistical limiting process.
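The distinction can be made visible in the same hypothetical setting. In the sketch below, the absolute surplus of heads typically grows on the order of the square root of the number of tosses, while the relative frequency nevertheless converges: the fluctuations are not corrected, they are diluted.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

tosses = rng.integers(0, 2, size=1_000_000)  # fair coin: 1 = heads, 0 = tails
heads = np.cumsum(tosses)                    # running count of heads

for k in (100, 10_000, 1_000_000):
    surplus = heads[k - 1] - k / 2           # absolute excess of heads over k/2
    rel_freq = heads[k - 1] / k              # relative frequency of heads
    print(f"n = {k:>9,}: surplus = {surplus:+9.1f}, frequency = {rel_freq:.4f}")
```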
Moreover, Bernoulli’s achievement was not merely to formulate this concept of limits mathematically. He also posed the practical question of how many observations are needed to draw reliable conclusions from experience. This is precisely where the transition from pure theory to empirical reasoning lies. Anyone seeking to assess uncertainty needs not only mathematical skill but also an awareness of sample size, precision, and the quality of the underlying observations. That is why the law of large numbers remains highly relevant in risk management today: for example, in calculating the frequency of claims in large insurance portfolios, in modeling expected defaults in broadly diversified loan portfolios, in estimating warranty and complaint rates in industrial production processes, or in the statistical evaluation of frequent, small operational losses. Wherever there are many sufficiently comparable individual events, the large number of observations creates a more robust basis for forecasts – without, of course, solving the problem of rare extreme events.
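Bernoulli’s own answer to this question in the Ars Conjectandi was famously conservative. One modern way to make it concrete, shown here as an illustrative sketch rather than as his original estimate, is Hoeffding’s inequality; the helper `sample_size` below is a hypothetical name:

```python
import math

def sample_size(epsilon: float, delta: float) -> int:
    """Smallest n such that the observed relative frequency lies within
    +/- epsilon of the true probability with confidence 1 - delta,
    via Hoeffding's inequality: n >= ln(2 / delta) / (2 * epsilon**2)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# Example: precision of one percentage point, 99 percent confidence.
print(sample_size(epsilon=0.01, delta=0.01))  # 26492 observations
```

Roughly 26,500 observations suffice to pin down an unknown probability to within one percentage point with 99 percent confidence; Bernoulli’s own bound for a comparable guarantee was far larger.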
Why Large Numbers Can Be Deceptive
Precisely because Bernoulli’s theorem is so powerful, it is often overused. Large numbers narrow the range of results, but they do not automatically make them more accurate. Under the right conditions, they reduce random fluctuations; however, they do not correct for biased sampling, they do not make dependent data independent, and they do not transform rare extreme events into harmless fluctuations around the mean. The theorem is powerful – but it is not a universal cure for methodological errors.
This can be demonstrated across several fields. In election research, a huge number of responses may be of little methodological value if the sample is systematically biased. In medicine, a large observational registry may suggest false causal conclusions if confounding or selection biases exist. In the financial world, markets notoriously violate the convenient assumptions of independent and identically distributed data: returns exhibit volatility clusters, heavy tails, and unstable dependency patterns. In such cases, large amounts of data often generate a dangerous form of pseudo-accuracy.
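The polling example can be turned into a small numerical sketch (entirely hypothetical numbers: true support is 50 percent, but supporters are twice as likely to respond as opponents):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n = 1_000_000                       # a huge number of contacts...
support = rng.random(n) < 0.5       # ...in a population with 50% true support

# Selection bias: supporters respond with probability 0.6, opponents with 0.3
# (both probabilities invented for illustration).
responds = rng.random(n) < np.where(support, 0.6, 0.3)

estimate = support[responds].mean()
print(f"respondents: {responds.sum():,}  estimated support: {estimate:.3f}")
# Prints an estimate near 0.667: stable, precise, and systematically wrong.
```

The law of large numbers works exactly as promised here; it simply stabilizes the wrong quantity.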
One could also say: The law of large numbers provides reassurance where it should actually prompt methodological vigilance. For with every growing dataset, not only does the likelihood of statistical stabilization increase, but so does the temptation to lose sight of the structure of the dataset itself. Those who focus solely on the impressive n confuse quantity with quality. This is precisely what makes large numbers deceptive.
When Averages Mask Risks
This insight is of immediate importance for modern risk management. In many practical applications, risks are aggregated: credit portfolios are consolidated, operational losses are summarized in annual statistics, cyber incidents are bundled into key metrics, and loss histories are presented as expected values or average costs. This is necessary and often unavoidable. But aggregation comes at a price. It not only smooths the surface but sometimes obscures precisely those edges of the distribution where the truly critical risks are concentrated.
A portfolio may appear stable on average yet be severely compromised by just a few extreme failures. An IT infrastructure may exhibit a low average failure rate over many years yet suffer a catastrophic failure due to a single exceptional event. A company may see seemingly reassuring average values in its dashboards and, precisely because of this, become blind to cluster risks, dependencies, thresholds, and nonlinear cascades. Bernoulli’s theorem explains why repetition can generate insight. However, it also indirectly warns against confusing average values with certainty.
This is precisely why, in risk management, it is not enough to know only the expected value. One must consider the entire distribution: dispersion, skewness, outliers, extreme ranges, and model assumptions. Anyone who views risks solely as an average burden misses their actual operational and strategic significance.
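A simple numerical sketch of this contrast (hypothetical figures; the lognormal parameters are chosen so that both loss distributions have roughly the same expected loss of 1.0):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 1_000_000

# Two loss distributions with (roughly) the same expected loss of 1.0:
light = rng.exponential(scale=1.0, size=n)           # thin-tailed losses
heavy = rng.lognormal(mean=-2.0, sigma=2.0, size=n)  # heavy tail; mean = exp(-2 + 2**2 / 2) = 1.0

for name, losses in (("light-tailed", light), ("heavy-tailed", heavy)):
    print(f"{name}: mean = {losses.mean():.2f}, "
          f"99.9% quantile = {np.percentile(losses, 99.9):.1f}")
```

Both portfolios report practically the same average loss, but the extreme quantile of the heavy-tailed one is an order of magnitude larger: exactly the difference the mean conceals.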
The old statistics joke sums it up perfectly: A statistician wades through a river that is, on average, only one meter deep – and drowns. As exaggerated as the joke is, it describes the problem with precision. The mean can be reassuring, precisely because it smooths out the dangerous spots. But risks do not materialize on average; they materialize at the deep, skewed, and extreme points of the distribution. For risk management, this is a banal yet fundamental insight: those who look only at the mean confuse mathematical order with real safety.
What Bernoulli Leaves Behind for Modern Decision-Makers
Bernoulli’s legacy is therefore ambiguous – and precisely for that reason so fruitful. On the one hand, he shows that knowledge can arise from repetition. Experience need not remain stuck in the anecdotal; it can be systematically organized. On the other hand, he shows how easily this knowledge is overestimated as soon as one forgets the conditions to which it is bound. Large numbers provide orientation, but not certainty. They discipline judgment, but do not replace it.
This tension is what continues to make his thinking so compelling today. Looking at Bernoulli takes us back to that imaginary desk in Basel, where a new science of uncertainty emerged from coin tosses, mortality tables, and notebooks. What began there lives on today in statistics, actuarial science, risk modeling, scenario analysis, and data-driven decision-making. Yet the most important lesson is perhaps still the same as it was more than three centuries ago: Reliable conclusions require not merely a lot of data, but the right data, sound models, and intellectual rigor.
The conclusion is therefore as simple as it is demanding. Anyone who wants to understand risks must not be satisfied with averages. They must examine the assumptions, the structure of the data, and the extremes of reality. It is precisely there that it is determined whether systems are robust or merely appear stable.
Bibliography and Further Reading
- Bernoulli, Jakob (1682): Conamen novi systematis cometarum. Henricus Wetstein, Amsterdam.
- Bernoulli, Jakob (1713): Ars Conjectandi. Thurnisius Brothers, Basel.
- Bernstein, Peter L. (1996): Against the Gods: The Remarkable Story of Risk. John Wiley & Sons, New York.
- Gigerenzer, Gerd / Swijtink, Zeno / Porter, Theodore / Daston, Lorraine / Beatty, John / Krüger, Lorenz (1989): The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press, Cambridge.
- Hacking, Ian (1975): The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction, and Statistical Inference. Cambridge University Press, Cambridge.
- Romeike, Frank (2007): Jakob Bernoulli, in: RISK MANAGER, Issue 1/2007, pp. 12–13.
- Romeike, Frank (2015): Beautiful, Colourful Risk: Benoît B. Mandelbrot – Remembering the Father of Fractals, in: Union Investment Institutional [ed.]: The Measurement of Risk, Frankfurt am Main 2015, pp. 197–207.