The Drunkard’s Walk: How Randomness Rules Our Lives 🎲
Everything we care about lies somewhere in the middle, where pattern and randomness interlace. — James Gleick, *Chaos: Making a New Science* (2011)
It was pretty hard to think of a quote to start this book summary. There are a ton of good examples from Fooled By Randomness but I had to settle on this one.
Most of life is a dance with randomness. We navigate it even when we don’t realize it. We forget how much randomness controls and determines our lives.
In this review, I’m going to go over a lot of my favorite parts of this book. For anyone who is reading this other than me, as usual my book notes are made for my own review and are a chaotic mess that usually only makes sense in my mind.
That being said, let’s get random y’all.
The Linda Problem – Specific > General 👧
Let’s start off with this one:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable?
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.
What is the correct answer? If you said the first option… bingo.
There are just more bank tellers than bank tellers who are feminists.
More than 80 percent of people chose option 2, regardless of whether they were novice, intermediate or expert statisticians. However, the probability of two events occurring in conjunction is always less than or equal to the probability of either one occurring alone.
If nothing about this looks strange, then Kahneman and Tversky have fooled you, for if the chance that Linda is a bank teller and is active in the feminist movement were greater than the chance that Linda is a bank teller, there would be a violation of our first law of probability, which is one of the most basic of all: The probability that two events will both occur can never be greater than the probability that each will occur individually.
Simple arithmetic: the chances that event A will occur = the chances that events A and B will occur + the chance that event A will occur and event B will not occur.
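That arithmetic is easy to check mechanically. Here’s a minimal Python sketch using a made-up toy population (the counts of tellers and feminist tellers are invented purely for illustration):

```python
from fractions import Fraction

# Invented toy population: 1,000 people, 50 bank tellers,
# of whom 10 are also active feminists.
total = 1000
tellers = 50
feminist_tellers = 10

p_teller = Fraction(tellers, total)
p_teller_and_feminist = Fraction(feminist_tellers, total)
p_teller_not_feminist = Fraction(tellers - feminist_tellers, total)

# P(A) = P(A and B) + P(A and not B)
assert p_teller == p_teller_and_feminist + p_teller_not_feminist

# ...so the conjunction can never be more probable than the single event:
assert p_teller_and_feminist <= p_teller
```

Whatever counts you plug in, the conjunction loses: adding “and is a feminist” can only carve people out of the bank-teller group, never add them.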
Kahneman and Tversky concluded that because the detail “Linda is active in the feminist movement” rang true based on the initial description of her character, when they added that detail to the bank-teller speculation, it increased the scenario’s credibility.
If the details we are given fit our mental picture of something, then the more details in a scenario, the more real it seems and hence the more probable we consider it to be—even though any act of adding less-than-certain details to a conjecture makes the conjecture less probable.
Basically – the more specific the details of an event are… the MORE you believe it to be true, even if the joint probability of the two events occurring together is incredibly small.
The ability to evaluate meaningful connections among different phenomena in our environment may be so important that it is worth seeing a few mirages.
It’s important to remember our brains are advanced pattern detection machines that can detect agency or cause where there is none. 🧠
The Law of Compounding Probability 💻
That brings us to our next law, the rule for compounding probabilities: If two possible events, A and B, are independent, then the probability that both A and B will occur is equal to the product of their individual probabilities.
Suppose a married person has on average roughly a 1 in 50 chance of getting divorced each year.
On the other hand, a police officer has about a 1 in 5,000 chance each year of being killed on the job.
What are the chances that a married police officer will be divorced and killed in the same year?
According to the above principle, if those events were independent, the chances would be roughly 1/50 × 1/5,000, which equals 1/250,000.
Of course the events are not independent; they are linked: once you die, darn it, you can no longer get divorced. And so the chance of that much bad luck is actually a little less than 1 in 250,000.
When you want to know the chances that two independent events, A and B, will both occur, you multiply; if you want to know the chances that either of two mutually exclusive events, A or B, will occur, you add.
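As a sketch, here are both rules in Python, using the book’s divorce and police figures for the multiplication case and a die example of my own for the mutually exclusive case:

```python
from fractions import Fraction

p_divorce = Fraction(1, 50)    # chance of divorce in a given year
p_killed = Fraction(1, 5000)   # chance an officer is killed on the job that year

# Independent events both occurring: multiply.
both = p_divorce * p_killed
assert both == Fraction(1, 250000)

# Mutually exclusive events, either occurring: add.
# E.g. a fair die showing a 1 OR a 2:
either = Fraction(1, 6) + Fraction(1, 6)
assert either == Fraction(1, 3)
```

Using `Fraction` keeps the arithmetic exact, which matches how the book states the answer (1 in 250,000) rather than a rounded decimal.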
How About Error Rate? Looking at Courts 🧑‍⚖️
Estimates of the error rate due to human causes vary, but many experts put it at around 1 percent.
However, since the error rate of many labs has never been measured, courts often do not allow testimony on this overall statistic. Even if courts did allow testimony regarding false positives, how would jurors assess it?
Most jurors assume that given the two types of error—the 1 in 1 billion accidental match and the 1 in 100 lab-error match—the overall error rate must be somewhere in between, say 1 in 500 million, which is still for most jurors beyond a reasonable doubt.
But employing the laws of probability, we find a much different answer.
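Here’s why, sketched in Python. Assuming the two error sources are independent, the chance of at least one error is dominated by the larger rate, not something in between:

```python
p_coincidental_match = 1e-9   # 1 in 1 billion accidental DNA match
p_lab_error = 1e-2            # 1 in 100 lab error

# Probability that at least one of the two errors occurs
# (assuming the two error sources are independent):
p_some_error = 1 - (1 - p_coincidental_match) * (1 - p_lab_error)

print(p_some_error)  # ~0.01, i.e. about 1 in 100 — nowhere near 1 in 500 million
```

The tiny rate barely moves the total; the overall error rate is essentially the lab-error rate, roughly 1 in 100 — far short of “beyond a reasonable doubt.”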
The use of mathematics in the modern legal system suffers from problems no less serious than those that arose in Rome so many centuries ago.
Random Thought Experiment – Lottery vs Death
Here’s another crazy game.
Suppose the state of California made its citizens the following offer: Of all those who pay the dollar or two to enter, most will receive nothing, one person will receive a fortune, and one person will be put to death in a violent manner.
Would anyone enroll in that game? 🤔
People do, and with enthusiasm. It is called the state lottery. And although the state does not advertise it in the manner in which I have described it, that is the way it works in practice.
For while one lucky person wins the grand prize in each game, many millions of other contestants drive to and from their local ticket vendors to purchase their tickets, and some die in accidents along the way.
The Law of Small Numbers (Oops)
The misconception—or the mistaken intuition—that a small sample accurately reflects underlying probabilities is so widespread that Kahneman and Tversky gave it a name: the law of small numbers.
The law of small numbers is not really a law. It is a sarcastic name describing the misguided attempt to apply the law of large numbers when the numbers aren’t large.
Going against the law of small numbers requires character. For while anyone can sit back and point to the bottom line as justification, assessing instead a person’s actual knowledge and actual ability takes confidence, thought, good judgment, and, well, guts.
Most of our life experiences are like that: we observe a relatively small sample of outcomes, from which we infer information and make judgments about the qualities that produced those outcomes.
Our decisions and logic exist within the confines of bounded rationality and experience yet we believe we know.
Re-introducing Bayes’ Theorem
BAYES’S THEORY shows that the probability that A will occur if B occurs will generally differ from the probability that B will occur if A occurs.
👉 Note – I wrote about it here.
In legal circles the mistake of inversion is sometimes called the prosecutor’s fallacy because prosecutors often employ that type of fallacious argument to lead juries to convicting suspects on thin evidence.
Loosely defined as, “The prosecutor’s fallacy is a fallacy of statistical reasoning typically used by a prosecutor to exaggerate the likelihood of a criminal defendant’s guilt. The fallacy can be used to support other claims as well – including the innocence of a defendant.”
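A toy numerical sketch of the inversion in Python. All the numbers here (city size, false-match rate) are invented for the sketch, not from the book:

```python
# Invented scenario: a city of 1,000,000 people, exactly one guilty person,
# and forensic evidence that matches an innocent person 1 time in 10,000.
population = 1_000_000
p_match_given_innocent = 1 / 10_000

# Expected number of innocent people who match anyway:
innocents_who_match = (population - 1) * p_match_given_innocent  # ~100 people

# P(innocent | match): of everyone who matches (the 1 guilty person
# plus ~100 innocents), what fraction is innocent?
p_innocent_given_match = innocents_who_match / (innocents_who_match + 1)

print(round(p_innocent_given_match, 3))  # ~0.99
```

P(match | innocent) is a tiny 0.0001, but P(innocent | match) is about 0.99 — the two conditional probabilities are nowhere near each other, which is exactly the swap the prosecutor’s fallacy exploits.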
Numbers definitely aren’t intuitive, and our understanding of probability affects thinking even in the most “objective” arena we have: law.
Is Objectivity Even Real? Or is it just a game?
A group of researchers at Clarion University of Pennsylvania collected 120 term papers and treated them with a degree of scrutiny you can be certain your own child’s work will never receive: each term paper was scored independently by eight faculty members.
The resulting grades, on a scale from A to F, sometimes varied by two or more grades. On average they differed by nearly one grade.
Since a student’s future often depends on such judgments, the imprecision is unfortunate.
Think Wittgenstein’s Ruler. If you use a ruler to measure a table, you’re also using the table to measure the ruler.
How About Wine? 🍷
Then, in 1978, an event often credited with the rapid growth of that industry occurred: a lawyer turned self-proclaimed wine critic, Robert M. Parker Jr., decided that, in addition to his reviews, he would rate wines numerically on a 100-point scale.
Over the years most other wine publications followed suit. Today annual wine sales in the United States exceed $20 billion, and millions of wine aficionados won’t lay their money on the counter without first looking to a wine’s rating to support their choice.
Yet the rating system thrives. Why?
The critics found that when they attempted to encapsulate wine quality with a system of stars or simple verbal descriptors such as good, bad, and maybe ugly, their opinions were unconvincing. But when they used numbers, shoppers worshipped their pronouncements. Numerical ratings, though dubious, make buyers confident that they can pick the golden needle.
Think crypto influencers: second-order chaotic systems = people searching for a predictive messiah to pull signal from pure noise.
A self-fulfilling prophecy.
Measurement Problems In Probability 📏
If a measurement of quality can be summarized by a number, a theory of measurement must address two key issues:
- How do we determine that number from a series of varying measurements?
- And given a limited set of measurements, how can we assess the probability that our determination is correct?
We now turn to these questions, for whether the source of data is objective or subjective, their answers are the goal of the theory of measurement.
THE KEY to understanding measurement is understanding the nature of the variation in data caused by random error.
Suppose we offer a number of wines to fifteen critics or we offer the wines to one critic repeatedly on different days or we do both. We can neatly summarize the opinions employing the average, or mean, of the ratings.
But it is not just the mean that matters: if all fifteen critics agree that the wine is a 90, that sends one message; if the critics produce the ratings 80, 81, 82, 87, 89, 89, 90, 90, 90, 91, 91, 94, 97, 99, and 100, that sends another.
Both sets of data have the same mean, but they differ in the amount they vary from that mean.
Since the manner in which data points are distributed is such an important piece of information, mathematicians created a numerical measure of variation to describe it.
That number is called the sample standard deviation. Mathematicians also measure the variation by its square, which is called the sample variance.
You can think of sample variance as a measurement of how far a set of numbers are spread out from their average value.
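The two rating sets above can be checked directly; the numbers are from the book’s own example:

```python
import statistics

uniform = [90] * 15
varied = [80, 81, 82, 87, 89, 89, 90, 90, 90, 91, 91, 94, 97, 99, 100]

# Both sets have the same mean...
print(statistics.mean(uniform), statistics.mean(varied))  # both are 90

# ...but very different spread:
print(statistics.stdev(uniform))    # 0.0  (sample standard deviation)
print(statistics.stdev(varied))     # 6.0
print(statistics.variance(varied))  # 36.0 (sample variance = stdev squared)
```

Same mean, wildly different messages — which is exactly why a mean reported without its spread tells you only half the story.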
⭐ This margin of error or variation is important. ⭐
For many polls a margin of error of more than 5 percent is considered unacceptable, yet in our everyday lives we make judgments based on far fewer data points than that.
Why Observing Success Isn’t Always Reliable 🔎
When we observe a success or a failure, we are observing one data point, a sample from under the bell curve that represents the potentialities that previously existed.
We cannot know whether our single observation represents the mean or an outlier, an event to bet on or a rare happening that is not likely to be reproduced.
But at a minimum we ought to be aware that a sample point is just a sample point, and rather than accepting it simply as reality, we ought to see it in the context of the standard deviation or the spread of possibilities that produced it.
👉 Important -> our limited sample set gives us the seal and feel of authenticity, yet it can be terribly misleading.
Introducing CLT or Central Limit Theorem
Today the central limit theorem and the law of large numbers are the two most famous results of the theory of randomness.
Explained succinctly: the CLT says that, given a large enough number of samples, their averages tend to approximate a normal distribution.
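A quick simulation sketch (dice instead of physical measurements — my choice, not the book’s): average 100 die rolls many times, and the averages pile up around 3.5 in a bell shape.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# Average of 100 die rolls, repeated 10,000 times.
sample_means = [
    statistics.mean(random.randint(1, 6) for _ in range(100))
    for _ in range(10_000)
]

# The sample means cluster tightly around the true mean of a die, 3.5.
print(statistics.mean(sample_means))  # close to 3.5

# Hallmark of a normal distribution: ~68% of values within one std dev.
sigma = statistics.stdev(sample_means)
within = sum(abs(m - 3.5) <= sigma for m in sample_means) / len(sample_means)
print(within)  # roughly 0.68
```

A single die roll is uniform, nothing bell-shaped about it — yet the *averages* of many rolls come out normal, which is the whole point of the theorem.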
By the 1830s most scientists had come to believe that every measurement is a composite, subject to a great number of sources of deviation and hence to the error law.
The error law and the central limit theorem thus allowed for a new and deeper understanding of data and their relation to physical reality.
In the ensuing century, scholars interested in human society also grasped these ideas and found to their surprise that the variation in human characteristics and behavior often displays the same pattern as the error in measurement. And so they sought to extend the application of the error law from physical science to a new science of human affairs.
Individual life spans—and lives—are unpredictable, but when data are collected from groups and analyzed en masse, regular patterns emerge.
👉 Life follows predictable patterns of scale and distribution.
In fact, a statistical ensemble of people acting randomly often displays behavior as consistent and predictable as a group of people pursuing conscious goals.
In other words… we as humans individually are unpredictable but as groups are highly predictable.
We associate randomness with disorder. 🎲
As nineteenth-century scientists dug into newly available social data, wherever they looked, the chaos of life seemed to produce quantifiable and predictable patterns. But it was not just the regularities that astonished them. It was also the nature of the variation. Social data, they discovered, often follow the normal distribution.
This then became the basis for forensic computation.
Quételet had stumbled on a useful discovery: the patterns of randomness are so reliable that in certain social data their violation can be taken as evidence of wrongdoing.
To measure how well bettors assess two teams, economists use a number called the forecast error, which is the difference between the favored team’s margin of victory and the point spread determined by the marketplace. It may come as no surprise that forecast error, being a type of error, is distributed according to the normal distribution.
Reality is CounterIntuitive 😔
A vivid example of such a change in social equilibrium occurred in the months after the attacks of September 11, 2001, when travelers, afraid to take airplanes, suddenly switched to cars.
Their fear translated into about 1,000 more highway fatalities in that period than in the same period the year before—hidden casualties of the September 11 attack.
Importance of Mean Regression 🧮
Early scientists like Galton soon realized that processes that did not exhibit regression toward the mean would eventually go out of control.
Without it, individual outlier traits or problems would extend toward infinity, creating maladaptive systems. Systems tend to regulate themselves by regressing to the mean. This is a fascinating property of systems as a whole: you can think of it as a self-regulatory function inherent in nature.
Let’s do a thought experiment…
Imagine that the tallest humans kept getting ever taller. People would continuously grow and grow until we were all 10-foot-tall giants.
Because of regression toward the mean, that does not happen. Regardless of genetic expression as a whole, specific traits tend to regress toward a mean.
The same can be said of innate intelligence, artistic talent, or the ability to hit a golf ball.
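A minimal simulation of why this happens: if observed performance = stable skill + luck, then today’s top performers were, on average, lucky, and their next performance falls back toward the mean. (The skill/luck model and all parameters here are assumptions for the sketch, not from the book.)

```python
import random
import statistics

random.seed(42)

# Model: each observed performance = stable skill + fresh luck (noise).
skills = [random.gauss(0, 1) for _ in range(10_000)]
round1 = [s + random.gauss(0, 1) for s in skills]
round2 = [s + random.gauss(0, 1) for s in skills]

# Take the top 10% performers from round 1...
cutoff = sorted(round1)[-1000]
top = [i for i in range(10_000) if round1[i] >= cutoff]

m1 = statistics.mean(round1[i] for i in top)
m2 = statistics.mean(round2[i] for i in top)
print(m1, m2)  # round-2 mean falls back toward the population mean of 0
```

The skill of the top group hasn’t changed between rounds; only the luck has been re-dealt, and that alone pulls the group back toward the mean.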
Questioning the Validity of Experience
Once again we come to the sample set problem. Individual experience gives you the seal and feel of authenticity.
👉 Much of this is tied to our need for agency and control in our environment.
Our desire to control events is not without purpose, for a sense of personal control is integral to our self-concept and sense of self-esteem.
Even if some of these shared ideas and notions are not exactly accurate, it’s important to remember that false memes and narratives face strong selective pressure toward proliferation.
When we look closely, we find that many of the assumptions of modern society are based, as table moving is, on shared illusions.
Survival in Nazi concentration camps “depended on one’s ability to arrange to preserve some areas of independent action, to keep control of some important aspects of one’s life despite an environment that seemed overwhelming.”
Nursing Home Experiment 👵
Researchers studied the effect of a feeling of control on elderly nursing-home patients.
Disturbingly, a follow-up study eighteen months later shocked them: the group that was not given control experienced a death rate of 30 percent, whereas the group that was given control experienced a markedly lower one.
Why is the human need to be in control relevant to a discussion of random patterns?
Because if events are random, we are not in control, and if we are in control of events, they are not random.
There is therefore a fundamental clash between our need to feel we are in control and our ability to recognize randomness.
That clash is one of the principal reasons we misinterpret random events. In fact, inducing people to mistake luck for skill, or pointless actions for control, is one of the easiest enterprises a research psychologist can engage in.
Let this sink in for a minute as it’s probably one of the most important parts of this book.
👉 We want to feel control in an otherwise mostly random world and so interpret the noise as such – patterns of linear relationships that can be understood with our existing mental models.
Putting This Into an Experiment
Ask people to control flashing lights by pressing a dummy button, and they will believe they are succeeding even though the lights are flashing at random.
Show people a circle of lights that flash at random and tell them that by concentrating they can cause the flashing to move in a clockwise direction, and they will astonish themselves with their ability to make it happen.
One manifestation of that illusion occurs when an organization experiences a period of improvement or failure and then readily attributes it not to the myriad of circumstances constituting the state of the organization as a whole and to luck but to the person at the top.
What an amazing CEO, he was responsible for everything. The same works in reverse too…
Shareholders often demand that companies respond to rough periods by changing management.
Yet studies found that in the three years after a firing there was no improvement, on average, in operating performance (a measure of earnings). Whatever the differences in ability among the CEOs, they were swamped by the effect of the uncontrollable elements of the system.
We Want to Be RIGHT Above All Else 🏆
When we are in the grasp of an illusion—or, for that matter, whenever we have a new idea—instead of searching for ways to prove our ideas wrong, we usually attempt to prove them correct.
To make matters worse, not only do we preferentially seek evidence to confirm our preconceived notions, but we also interpret ambiguous evidence in favor of our ideas.
And we unreasonably believe that the mistakes of the past must be consequences of ignorance or incompetence and could have been remedied by further study and improved insight.
On an emotional level many people resist the idea that random influences are important even if, on an intellectual level, they understand that they are.
👉 Lerner concluded that “for the sake of their own sanity,” people overestimate the degree to which ability can be inferred from success.
We are inclined, that is, to see movie stars as more talented than aspiring movie stars and to think that the richest people in the world must also be the smartest.
We have a skewed understanding of the world that seeks to align with our shared mental models and make reality fit into a nice, simply described, narrative-friendly box.
We miss the effects of randomness in life because when we assess the world, we tend to see what we expect to see. We in effect define degree of talent by degree of success and then reinforce our feelings of causality by noting the correlation.
Let’s finish this off with a quote from one of my favorite books, Fooled By Randomness…
“We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract.”