In the summer of 1654, a question posed by a gambler turned into a management question of remarkably enduring relevance: How does one evaluate a situation whose outcome is still uncertain, but on which a decision must be made now? The correspondence between Pascal and Fermat on the problem of points yielded the fundamental concepts of expected value, future paths, and fair valuation under uncertainty. It is precisely for these reasons that this episode does not belong in the cabinet of curiosities of the history of mathematics, but rather aids in understanding quantitative risk management: in expected losses under IFRS 9, in the best estimate under Solvency II, in governance decisions regarding fair distribution, and in the disciplined handling of model limitations.
A Letter from the Summer of 1654
The scene is well documented historically. The French mathematician and physicist Blaise Pascal wrote to the French mathematician and jurist Pierre de Fermat in late July 1654, stating that he could not resist replying, "even though I am still in bed." The letter, he reported, had reached him the previous evening; the subject was neither a national crisis nor a theological dispute, but a question from the world of gambling. This opening alone reveals why the story remains so relevant today: Mathematics appears here not as an academic discipline, but as a response to an urgent, seemingly everyday decision made under uncertainty.
Somewhere between Toulouse and Paris – to borrow Pascal's own phrase that the truth is the same in both cities – an interrupted game gives rise to a theory of rational decision-making. One thing is certain: a game-loving nobleman, Antoine Gombaud, known as the Chevalier de Méré, posed questions from the world of betting to Pascal; it is also certain that authoritative historical overviews treat the 1654 correspondence as the founding moment of modern probability theory. What is uncertain are only some embellishments added by later legend-building – and that is exactly how they should be treated.
For risk managers, the point of the episode is obvious. The real question is not: Who is currently in the lead? But rather: What is the fair value of a position whose future is uncertain, when the decision must be made not at the end, but in the middle of the game? Anyone who assesses credit risks, estimates insurance cash flows, determines the size of loss reserves, or decides in a crisis meeting on continuation, termination, or division is, at its core, still grappling with precisely this problem.
Two Men, Two Temperaments
Blaise Pascal was a child prodigy with a remarkably practical streak. Born in Clermont-Ferrand in 1623, he was educated by his father, drew attention to himself early on with his geometry, and later developed the Pascaline, one of Europe's first mechanical calculating machines. At the same time, standard biographies describe him as physically frail, often ill, and mentally possessed of an almost feverish intensity. This fits his style in solving problems: fast, precise, methodical, and averse to any ambiguity.
Fermat, by comparison, seems almost like the personification of serenity. He was a lawyer, a parliamentary councilor, and a judge in Toulouse – in other words, not a professional mathematician in the modern sense, but a scholar by vocation – and precisely for that reason a particularly unsettling benchmark for others. Anyone who administers justice by day and lays the foundations of probability by night has a very idiosyncratic concept of leisure. At the same time, one should not reduce him to his correspondence with Pascal: his later geometric work "De linearum curvarum cum lineis rectis comparatione" appeared in 1660 and was his only mathematical work published during his lifetime [Fermat 1660].
Key Insights and Discoveries
The first key insight from Pascal and Fermat is this: An open position is not valued based on what has already happened, but on the remaining possibilities for the future. This may sound obvious, but in organizations, it is by no means a given even today. In discussions about credit default risks, project risks, or the assessment of cyber risks, the focus often remains fixed on the past: Who has already invested how much, who has "actually" earned their keep, who is perceived to be in the lead? How often have certain cyber risks already occurred in the past? The Paris and Toulouse response is different: All of this may be psychologically understandable, but it is not the correct valuation logic. What is evaluated is what remains open.
The second insight is that fairness under uncertainty should not be viewed romantically, but can be constructed rationally. Fermat demonstrates this through symmetric cases; Pascal demonstrates it through expectation and recursion. This has striking relevance for the present day. Solvency II defines the best estimate as the expected present value of future cash flows, calculated as a probability-weighted average across all relevant scenarios, taking into account the time value of money. IFRS 9 requires an unbiased, probability-weighted expected value derived from a range of possible outcomes, taking into account multiple scenarios. In other words: Modern regulation, in the tone of the 21st century, often prescribes the same rationale that was first elegantly formulated in 1654.
The third insight is the most mature and perhaps the most important for risk management: Every outcome depends on its assumptions. The game theory model works because the rules are clear, the odds are equal in each round, and the possible future paths can be clearly defined. In the real world, data is often incomplete or missing entirely, dependencies are hidden, and the extent of damage notoriously follows its own – sometimes very painful – logic. This is precisely why early probability theory also became an early lesson in model humility. Even today, models are simplified representations of real-world relationships, whose limitations must be understood and challenged.
The Problem of Points
The classic version of the problem of points is simple, yet surprisingly profound: Two players each stake 32 gold coins (so-called pistoles), so a total of 64 coins is in the pot. The players compete in a series of rounds – the first to win three takes the entire stake. However, the game is interrupted at a score of 2:1. One of the players is leading, but has not yet won definitively. The crucial question now is: How should the stake be divided fairly?
Intuitively, one might say: The leading player gets the larger share, the other player gets correspondingly less. But this rough intuition is not enough for Pascal and Fermat. They are not looking for a "fair" solution in the sense of feeling or generosity, but rather a precisely calculated value – an objective price for an unfinished game situation.
Fermat's solution is a precise enumeration of the remaining possible future scenarios. The leading player, A, needs one more win; the trailing player, B, needs two. Therefore, at most two more rounds are needed to decide the outcome. Assuming equal odds per round, there are four equally likely sequences: AA, AB, BA, and BB. Only in the last case does B win the series; in the other three, A wins. The pot should therefore be divided in the fair ratio of 3:1, that is, 48 to 16. This was revolutionary not because the calculation was so difficult, but because here, for the first time, the present was systematically interpreted as a bundle of possible futures.
| Path to the Future | Round 1 | Round 2 | Series Winner | Result |
|---|---|---|---|---|
| AA | A | A | A | A wins |
| AB | A | B | A | A wins |
| BA | B | A | A | A wins |
| BB | B | B | B | B wins |
Table 01a: Fermat's path counting: The future as a set of possible paths
| Player | Winning Paths | Probability | Payout |
|---|---|---|---|
| A | 3 | 3/4 | 48 |
| B | 1 | 1/4 | 16 |
Table 01b: Fermat's path counting: Probabilities and fair payout
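Fermat's counting argument can be reproduced in a few lines. The sketch below (a minimal illustration, assuming fair rounds and the 2:1 score from the text) plays out all remaining round sequences, exactly as in Table 01a, even those where the series is already decided after the first round:

```python
from itertools import product

# Problem of points at a score of 2:1 in a best-of-five (first to 3 wins):
# A needs 1 more win, B needs 2, so at most 2 rounds remain.
paths = list(product("AB", repeat=2))  # AA, AB, BA, BB - all equally likely

def series_winner(path, a_needs=1, b_needs=2):
    """Return the series winner for a given sequence of round winners."""
    for round_winner in path:
        if round_winner == "A":
            a_needs -= 1
        else:
            b_needs -= 1
        if a_needs == 0:
            return "A"
        if b_needs == 0:
            return "B"

a_wins = sum(series_winner(p) == "A" for p in paths)  # 3 of 4 paths
pot = 64
share_a = pot * a_wins / len(paths)  # 64 * 3/4 = 48
share_b = pot - share_a              # 16
print(a_wins, share_a, share_b)      # 3 48.0 16.0
```

The trick of continuing to enumerate rounds even after the series is decided is what makes the four paths equally likely and the counting argument valid.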
Pascal arrives at the same result, but using a method that sounds even more like modern risk analysis. He separates a certain portion from an uncertain one. If A loses the next round, the score is 2:2; in this scenario, both would be entitled to 32 each. A can therefore consider these 32 to be guaranteed. Only the other half of the pot remains in dispute. Since the next round is fair, A is entitled to half of this uncertain remainder in expectation, i.e., 16. Together, that makes 48. Fermat counts; Pascal evaluates. One thinks in paths, the other in stages. Both invent, each in their own way, the grammar of rational decisions under uncertainty.
The fair price of an open position is not its current score, but the value of its still-possible futures.
| Step | Amount | Explanation |
|---|---|---|
| Certain share for A | 32 | At 2:2, each would have 32 |
| Uncertain remainder | 32 | Not yet decided |
| Expected value of uncertain part | 16 | 50% chance |
| Total for A | 48 | 32 + 16 |
| Total for B | 16 | Rest |
Table 02: Pascal's Expected Value Logic: Valuation of States and Residual Uncertainty
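Pascal's stage-by-stage logic generalizes naturally to a backward recursion over game states, the same pattern used today in decision trees. The following sketch (an illustration of the idea, not a historical reconstruction) values each state as the average of its two fair successor states:

```python
from functools import lru_cache

POT = 64

@lru_cache(maxsize=None)
def value_for_a(a_needs, b_needs):
    """Fair share of the pot for player A, by backward recursion on states."""
    if a_needs == 0:
        return float(POT)  # A has already won the series
    if b_needs == 0:
        return 0.0         # B has already won the series
    # The next round is fair, so the state is worth the average of its successors
    return 0.5 * value_for_a(a_needs - 1, b_needs) + 0.5 * value_for_a(a_needs, b_needs - 1)

# Score 2:1 in a first-to-three series: A needs 1 win, B needs 2
print(value_for_a(1, 2))  # 48.0 = 32 certain + 16 expected from the open half
```

The base cases are the finished series; everything else is valuation of residual uncertainty, exactly as in Table 02.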
Intuitively, the beauty of the whole lies in the fact that two intellectual tools interlock here, tools that continue to shape risk management practice to this day: combinatorics and expected value, scenario paths and recursive valuation, possible worlds and current prices. Anyone who builds decision trees today, weights credit losses across scenarios, or views a stress test as an evaluation tool is intellectually closer to Pascal and Fermat than they realize.
From Game to Science
The episode of the summer of 1654 did not remain a private matter. In authoritative overviews, the correspondence is regarded as the starting point of modern probability theory. As early as 1657, the Dutch astronomer, mathematician, and physicist Christiaan Huygens made this new way of thinking suitable for publication and teaching in his work "De ratiociniis in ludo aleae". Huygens claimed that he was unaware of the contents of the correspondence between Fermat and Pascal. However, an analysis of the solutions to the five problems listed at the end of his treatise suggests that he was familiar with Pascal's ideas, but not with Fermat's combinatorial methods. Later authors, such as the Swiss mathematician Jakob Bernoulli and the French mathematician Abraham de Moivre, built upon this work. Although Pascal's "Traité du triangle arithmétique" was not published until 1665, its core content, according to current editions, had already been developed by 1654. Thus, within a few years, a question posed by a gambler evolved into a method, a treatise, and ultimately a discipline.
Lessons for Risk Management
The first lesson is: expected value trumps volume. For example, IFRS 9 requires an unbiased, probability-weighted expected value for expected credit losses (ECL), derived from a range of possible outcomes that incorporates reasonable and supportable information about past events, current conditions, and future economic developments. That sounds dry, but conceptually it is almost Pascalian. It is not the loudest default history that wins, but the cleanly weighted sum of open futures.
The second lesson is: Fair distribution is governance. The problem of points is the archetype of every decision that must be made amid uncertainty and with incomplete information: a negotiation with a borrower, a goodwill decision in the event of a claim, an internal loss allocation following a cyber incident, or a project cancellation under time pressure.
The Basel Committee on Banking Supervision (BCBS) framework for operational resilience therefore calls not merely for descriptive tools such as heat maps, but above all for robust decision-making foundations: the identification of critical business processes, the systematic analysis of internal and external dependencies, the use of severe but plausible scenarios, and the establishment of clear tolerance limits for disruption.
At its core, this is about the ability to make consistent and transparent decisions even when faced with interruptions. This is precisely where the structural similarity to game theory lies: when processes are interrupted, information is incomplete, and time is of the essence, explicit evaluation rules are needed to assess the remaining options fairly and rationally.
The third lesson is the most modern: models are useful because they simplify – and risky because they simplify. The scientific literature has long emphasized that models necessarily abstract and thus always provide only a reduced representation of reality. As George Box succinctly put it: "All models are wrong; the practical question is how wrong do they have to be to not be useful."
This is precisely the core of model risk: it arises from simplifying assumptions, uncertain or distorted input data, structural model errors, or the inappropriate application of models. Consistent risk management therefore requires not only the use of models, but also their critical scrutiny, validation, and continuous refinement.
This is the flip side of the elegant breakthrough of 1654: no matter how convincing a method may be, it remains bound by its underlying assumptions. Those who quantify uncertainty must not ignore the uncertainty inherent in the model itself.
A Case Study: Why Gut Feelings Aren't Enough
Let's consider a deliberately simple and fictional example from the business world. A medium-sized bank handles its payment transactions via a central platform. The question is: Is a second, geographically redundant operating environment worth the investment?
Without redundancy, the bank estimates the annual probability of a severe outage at around 6 percent. The combined cost of business interruption, post-incident processing, contractual penalties, and customer friction would amount to 20 million euros in the event of an incident. The expected annual loss is thus 0.06 × 20 million = 1.2 million euros. The figures are illustrative; the underlying logic, however, is not: It corresponds to the expected value logic, which assesses risks as a combination of probability of occurrence and loss amount.
With redundancy, the probability of failure drops to 1.5 percent; the residual loss would be only 6 million euros. However, the additional infrastructure incurs annual costs of 700,000 euros. This results in an expected total annual cost of 0.015 × 6 million = 90,000 euros in residual loss plus 700,000 euros in mitigation costs, for a total of 790,000 euros. Assuming a risk-neutral valuation, the redundancy is thus economically advantageous. One compares certain costs with probability-weighted future losses – or, to put it another way: one replaces intuition with structured valuation.
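The comparison in the case study can be written down directly as a small expected-value calculation. The sketch below uses the illustrative figures from the text; the function name and structure are, of course, just one way to express the logic:

```python
def expected_annual_cost(p_outage, loss, mitigation=0.0):
    """Risk-neutral expected annual cost: probability-weighted loss plus certain mitigation spend."""
    return p_outage * loss + mitigation

# Illustrative figures from the text (euros)
without_redundancy = expected_annual_cost(0.06, 20_000_000)        # 1,200,000
with_redundancy = expected_annual_cost(0.015, 6_000_000, 700_000)  # 790,000

print(f"without: {without_redundancy:,.0f}, with: {with_redundancy:,.0f}")
print("redundancy advantageous:", with_redundancy < without_redundancy)
```

The structure makes explicit what the text describes: certain costs (the 700,000 euros of mitigation) are compared against probability-weighted future losses.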
However, the decision only becomes robust after sensitivity and model testing. What if the 6 percent figure – without redundancy – is overestimated and actually amounts to only 3 percent? In that case, the expected loss without mitigation measures would be 600,000 euros – and the decision would be overturned. Conversely, the question arises as to whether the model includes all relevant loss components, such as reputational damage, dependencies on third parties, or regulatory follow-up costs. If such effects are underestimated or omitted, the case for redundancy is understated – and the comparison can tip back in its favor.
The decision therefore depends not only on a point-by-point calculation, but also on the quality of the assumptions, the coverage of extreme scenarios, and the stability of the results in the face of model uncertainty.
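The sensitivity question from the example can be made precise with a small sweep over the uncertain outage probability, including the breakeven point at which the decision flips (again using the illustrative figures from the text):

```python
def expected_annual_cost(p, loss, mitigation=0.0):
    """Risk-neutral expected annual cost: probability-weighted loss plus certain mitigation spend."""
    return p * loss + mitigation

COST_WITH_REDUNDANCY = expected_annual_cost(0.015, 6_000_000, 700_000)  # 790,000 euros

# Sweep the uncertain outage probability in the no-redundancy case
# (text: 6% base case, 3% as the alternative estimate)
for p in (0.06, 0.05, 0.04, 0.03):
    cost_without = expected_annual_cost(p, 20_000_000)
    print(f"p={p:.1%}: without={cost_without:,.0f}, "
          f"redundancy pays off: {cost_without > COST_WITH_REDUNDANCY}")

# Breakeven probability: redundancy pays off only above 790,000 / 20,000,000
breakeven = COST_WITH_REDUNDANCY / 20_000_000
print(f"breakeven p = {breakeven:.2%}")  # 3.95%
```

At 3 percent the expected loss without mitigation is 600,000 euros and the redundancy no longer pays off in expectation; the decision hinges on whether the true probability lies above or below roughly 4 percent, which is exactly why the assumption deserves scrutiny.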
Conclusion and Outlook
The roll of the dice that changed the world was not historic simply because people were gambling. Gambling existed long before Pascal and Fermat. What made it historic was that they found a new answer to an age-old question: What is fair, reasonable, and decidable when the future has not yet unfolded? Their answer consists of four steps that remain valid to this day. One identifies possible outcomes. One weights these outcomes. One translates them into a present value. And one never forgets that the result depends on assumptions.
This is precisely why many quantitative methods in risk management are so deeply rooted in 17th-century ideas. This also changes our view of the history of mathematics. Seen this way, Pascal is not just the man of the triangle, and Fermat is not just the genius of the margin note. Instead, both become early architects of a way of thinking without which neither credit management nor provisioning, neither crisis governance nor model validation would be seriously conceivable. In a world that likes to moralize about uncertainty, dramatize it, or decorate it with dashboards, their lesson remains refreshingly old-fashioned: First sort out the open possibilities, then judge.
Bibliography and Further Reading
- Box, George E. P. (1976): Science and Statistics. In: Journal of the American Statistical Association, Vol. 71, No. 356, pp. 791–799.
- Pascal, Blaise / Fermat, Pierre de (1679/1654): Correspondance sur les partis et questions de hasard. In: Varia Opera Mathematica Petri de Fermat. Toulouse 1679.
- Pascal, Blaise (1665): Traité du triangle arithmétique, avec quelques autres petits traitez sur la même matière. Guillaume Desprez, Paris 1665.
- Huygens, Christiaan (1657): De ratiociniis in ludo aleae. Ex officina J. Elsevirii, Leiden 1657.
- M.P.E.A.S. (= Pierre de Fermat) (1660): De linearum curvarum cum lineis rectis comparatione dissertatio geometrica. Arnaud Colomiez, Toulouse 1660.
- Romeike, Frank (2007): Pierre de Fermat, in: RISK MANAGER, Issue 19/2007, pp. 22–24.


