How do you know?

I’m reading Daniel Kahneman’s book, Thinking, Fast and Slow. If you’ve read other books on behavioral economics and decision making — such as Fooled by Randomness, Predictably Irrational, or Antifragile — this book will be an interesting expansion of the ideas presented there. But let me tell you, chapter 21 is where this book is at.

In chapter 21, Kahneman gives some great applications of decision-making heuristics: approximate, algorithmic tools that help a person make a decision under most circumstances. He explains how one researcher, Orley Ashenfelter, developed an algorithm to judge whether a particular Bordeaux vintage will be valuable to collectors using only three variables: the amount of rainfall the preceding winter, the temperature and rainfall during the summer growing season, and the chateau producing the wine. Kahneman claims that this algorithm explains 90% of the value of a particular Bordeaux vintage, and Ashenfelter says the weather explains 80% of Bordeaux’s quality (as measured by price at auction) while the chateau explains 20%. Kahneman goes on to explain how simple algorithms often do a better job of predicting complicated situations than complex statistical models or human experts do: broad stock market returns, the price performance of individual stocks, the success of a proposed scientific research program, political situations, hiring a new employee. I’m thrilled to know that there are tools we can use to make better decisions in areas that typically baffle people, and I find it odd that most people ignore these tools and continue making unnecessary errors.
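
Kahneman doesn’t reproduce the formula itself, but the shape of it is easy to sketch: a plain linear score built from a few weather variables plus a chateau adjustment. The Python below is a minimal illustration of that idea; the coefficients and chateau offsets are placeholders I made up for the example, not Ashenfelter’s published regression.

```python
# A minimal sketch of the *shape* of an Ashenfelter-style vintage score:
# a linear combination of a few weather variables plus a chateau adjustment.
# All coefficients and chateau offsets are illustrative placeholders, not
# Ashenfelter's published regression values.

CHATEAU_OFFSET = {          # hypothetical reputation adjustments
    "Chateau A": 0.30,
    "Chateau B": 0.10,
    "Chateau C": -0.05,
}

def vintage_score(winter_rain_mm, summer_temp_c, summer_rain_mm, chateau):
    """Relative (log-scale) score: higher means a more valuable vintage."""
    weather = (0.0012 * winter_rain_mm    # wet preceding winter helps
               + 0.06 * summer_temp_c     # warm growing season helps
               - 0.004 * summer_rain_mm)  # rainy growing season hurts
    return weather + CHATEAU_OFFSET.get(chateau, 0.0)

# Two hypothetical vintages from the same chateau: the warm, dry summer
# scores higher than the cool, wet one.
print(vintage_score(600, 18.0, 150, "Chateau A"))
print(vintage_score(600, 16.0, 300, "Chateau A"))
```

The point is not the particular numbers but the structure: a handful of weighted inputs, applied the same way every time, with no room for an expert’s mood or overconfidence to creep in.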

Kahneman does note that human experts can predict some areas of experience, but only where the situations are regular and well-documented: things like house fires, chess games, and other settings that change in consistent, well-understood ways. Taleb, in Antifragile, explains the difference between the predictable and unpredictable situations that people encounter using a metaphor of quadrants.

Taleb’s quadrant diagram shows that situations with complex pay-offs and unknown statistical distributions, such as stock market price performance and political events, are unpredictable, and changes in outcome can be drastic. Chess games and house fires, by contrast, are more predictable: their behavior is less volatile, and their changes are less extreme because we can better understand those events.
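
As a crude way to hold the diagram in your head, here is the classification written out as a tiny function; the quadrant labels and examples are my own shorthand for Taleb’s picture, not his exact wording.

```python
# A toy rendering of the quadrant idea: predictability depends on whether the
# payoff structure is simple or complex and whether the relevant statistical
# distribution is thin-tailed or fat-tailed/unknown.

def quadrant(payoff, tails):
    """payoff: 'simple' or 'complex'; tails: 'thin' or 'fat'."""
    if tails == "thin":
        return ("First quadrant: forecasts and expert intuition work well"
                if payoff == "simple"
                else "Second quadrant: statistical methods still perform well")
    return ("Third quadrant: fat tails, but simple payoffs limit the damage"
            if payoff == "simple"
            else "Fourth quadrant: fat tails plus complex payoffs; beware prediction")

print(quadrant("simple", "thin"))    # e.g. house fires, chess outcomes
print(quadrant("complex", "fat"))    # e.g. stock prices, political crises
```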

It is particularly pertinent to philosophy that statistics play a key role in how people know about the world, yet most theories of knowledge (i.e. epistemological theories) ignore the importance of statistics in our knowledge. For example, it is rare for anyone to know something with 100% certainty: even the strength of gravity varies across the Earth’s surface, although most high school graduates will tell you without hesitation that the acceleration due to gravity on Earth is 9.8 m/s². Still, that standard value is good enough for nearly everyone living on Earth. Most of us will never need to know that gravity is slightly weaker on top of Mount Everest, in Kuala Lumpur, and in Mexico City, and slightly stronger in Oslo, Norway. The fact remains that we often don’t know what we think we know: in other words, we are often less than 100% certain of many facts that we would say we know for certain. However, as Taleb’s diagram shows, this uncertainty is trivial in most “quadrants” of our lives. The “fourth quadrant” is the domain where that uncertainty can come back to bite us.
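
If you want to see roughly how much gravity actually varies, a standard latitude formula plus a simple altitude correction is enough for a back-of-the-envelope check. The sketch below uses the 1980 international gravity formula and a free-air correction; it ignores local geology entirely, so treat the outputs as approximations.

```python
import math

# Back-of-the-envelope gravity: the 1980 international gravity formula for
# latitude, reduced by the free-air gradient for altitude. Local geology is
# ignored, so these are approximations only.

def surface_gravity(latitude_deg, altitude_m=0.0):
    """Approximate gravitational acceleration (m/s^2) at the given latitude
    and altitude."""
    phi = math.radians(latitude_deg)
    g_lat = 9.780327 * (1
                        + 0.0053024 * math.sin(phi) ** 2
                        - 0.0000058 * math.sin(2 * phi) ** 2)
    return g_lat - 3.086e-6 * altitude_m   # free-air correction

print(surface_gravity(3.1, 60))      # Kuala Lumpur, near the equator: ~9.780
print(surface_gravity(59.9, 20))     # Oslo, high latitude: ~9.819
print(surface_gravity(28.0, 8849))   # summit of Everest: ~9.764
```

The differences show up in the third decimal place, which is exactly why “9.8 m/s²” is good enough for almost every purpose while still being, strictly speaking, not quite what you’d measure.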

The implications of this over-confidence in our knowledge are important. It’s well documented that most finance experts aren’t as good at picking stocks as they say they are, and that most political pundits don’t have a clue about where the next political crisis will erupt. Kahneman covers this in his book, and you can find other authors documenting the same findings. Still, we need to get a handle on how much to trust what someone is telling us. How do we do this? How do we know what we know?

Philosophers talk about knowledge in terms of “justified true belief”. This definition requires that a belief be both justified and valid. The concept of truth is a logical value that provides rational support for holding a belief. Justification explains why we ought to hold the belief by showing how it applies to the empirical world. In other words, truth is the abstract component of knowledge, and justification ties that abstract component to support in the empirical world. It seems to me that demonstrating the validity of a belief is relatively simple compared to justifying it. Moreover, validity can be a trivial value: it’s easy to construct valid arguments about things that don’t exist. For example, here is a valid but empirically false, useless, and meaningless syllogism: “All unicorns poop rainbows. I am a unicorn. Therefore, I poop rainbows.” Proving that a belief is valid is useless if the belief has no application to the empirical world. Consequently, most debates circle around the justification of a particular belief rather than its validity.
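
To see how cheap validity is on its own, here is the unicorn syllogism written out formally (a Lean sketch of my own, not anything from Kahneman or Taleb). The derivation goes through for any predicates whatsoever, which is exactly the point: validity says nothing about whether the premises hold in the empirical world.

```lean
-- The unicorn syllogism, formalized: from "every unicorn poops rainbows" and
-- "a is a unicorn", it follows that "a poops rainbows". The derivation is
-- valid for any predicates, including empirically empty ones.
example (α : Type) (Unicorn PoopsRainbows : α → Prop) (a : α)
    (h1 : ∀ x, Unicorn x → PoopsRainbows x) (h2 : Unicorn a) :
    PoopsRainbows a :=
  h1 a h2
```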

Some might say that the philosophical (or possibly religious) concept of Truth applies to justification, because a true and valid argument must apply to the world we inhabit. However, truth is a difficult concept to apply to justification, because so much of our previous knowledge has been replaced with more accurate versions, as we saw with the gravity example. Consequently, it seems cleaner and easier to talk about justification in terms of testing whether a belief applies to the empirical world. The methods of such testing are beyond the scope of this post, but I may cover them in another post.

Statistics come into play in justifying one’s knowledge. Sometimes those statistics are trivial: how likely is it that you’ll need to eat breakfast tomorrow morning? Other times, those statistics are critical: how likely is it that you’ll have enough money saved and activities planned to make life worth living if you retire tomorrow morning? Unlike working through Frege’s logical calculus or parsing syllogisms, showing that a belief is justified is difficult. It requires a demonstration that the belief is well supported by empirical observations, and that will rarely be a deduction. More likely, it will be an inference. Political platforms, investment ideas, and religious ideologies live in this space, and much energy has been spent attempting to justify these kinds of beliefs.
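
As a toy illustration of what that kind of statistical justification looks like in practice, here is a small Monte Carlo sketch for the retirement question. Every number in it (the savings, spending, and return assumptions) is invented for the example; the point is only that the output is a probability supporting an inference, not a proof.

```python
import random

# Toy Monte Carlo for the retirement question: not a deduction, just an
# inference about how often the belief "my savings will last 30 years" holds
# up under simulated futures. Every number below is an invented assumption.

def savings_last(savings, annual_spend, years=30, trials=10_000):
    """Fraction of simulated futures in which the savings never run out."""
    successes = 0
    for _ in range(trials):
        balance = savings
        for _ in range(years):
            annual_return = random.gauss(0.05, 0.12)   # assumed market returns
            balance = balance * (1 + annual_return) - annual_spend
            if balance <= 0:
                break
        else:
            successes += 1
    return successes / trials

# With these made-up numbers, the belief "I can retire tomorrow" is justified
# only to the degree this probability is acceptably high.
print(savings_last(savings=800_000, annual_spend=40_000))
```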

The point of all this prattle is that it is useful to consider the situation in which we find ourselves and to ask whether we’re thinking about it in the right way. Is this a situation where being approximately correct is good enough, or will being wrong have dire consequences? It’s also useful to know how you know something you believe: can the belief be deduced, as in math and logic, or does it require further justification, as in engineering, where we apply math and logic to the empirical world, or in the “messier” beliefs of the humanities?
