Monthly Archives: August 2016

The Problem of Truth

A website recently published an interesting article that helps explain the problem of truth: statistical analysis makes proving an empirical hypothesis difficult. The problem here is that truth is a binary value: something is either true or false. We can’t deal in half-truths, yet half-truths compose most of our world. Often, a half-truth is the best we can do in answering difficult questions. This is why we rely on statistics to describe so much of our experience: it is a type of analysis built for questions that don’t fit a binary, true-false, yes-no type of answer.

Statistics often provide scientists a result, which is often publishable, but a publishable result is not always the same thing as an answer to the question (1). A single data set may yield many publishable results that offer conflicting or even contradictory answers to a scientist’s question (1). N. N. Taleb makes a similar point in his analysis of decision making under complexity: we have considerable difficulty predicting the correct answer, and often we can’t know the correct answer at all, due to limits on our ability to predict the outcome of a given event. Some events follow statistical distributions so unpredictable that extensive observation is required before any meaningful or useful conclusion can be drawn.
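A small simulation can illustrate Taleb’s point about unpredictable distributions. This is only a sketch using Python’s standard library, and the Pareto shape parameter is an arbitrary choice for illustration; the idea is that in a fat-tailed distribution, a single extreme observation can dominate everything seen so far, so a modest sample tells you little:

```python
import random

def largest_share(draws):
    """Fraction of the total contributed by the single largest draw."""
    return max(draws) / sum(draws)

random.seed(42)  # fixed seed so the illustration is reproducible

# Thin-tailed: quantities like heights or temperatures behave this way.
normal = [abs(random.gauss(100, 15)) for _ in range(10_000)]

# Fat-tailed: market returns and wealth look more like this
# (Pareto draws with shape 1.1 -- a heavy tail, chosen for illustration).
pareto = [random.paretovariate(1.1) for _ in range(10_000)]

print(f"thin tail: biggest draw is {largest_share(normal):.2%} of the total")
print(f"fat tail:  biggest draw is {largest_share(pareto):.2%} of the total")
```

With the thin tail, the largest of 10,000 draws is a negligible sliver of the sum; with the fat tail, one draw can account for a sizable fraction. That is why “extensive observations” are needed before averages from such distributions stabilize.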

Moreover, subjectivity influences a researcher’s results, which makes Truth a difficult commodity to produce. Daniel Kahneman’s book, _Thinking, Fast and Slow_, helps explain how the human mind interferes with our supposedly pure perception of the world, which makes our determination of truth difficult, if not impossible. A better measure of scientific knowledge might be utility: what can we do with this information? Unlike truth, utility is not a binary value that we assign to knowledge, and it admits of degrees that can be tested empirically.

In the history of science, most results turn out to be false, at least in part if not completely. John Ioannidis supports this idea, stating that most published findings are false (1). “The important lesson here is that a single analysis is not sufficient to find a definitive answer. Every result is a temporary truth, one that’s subject to change when someone else comes along to build, test and analyze anew” (1).

We ought to focus our efforts on developing the utility of the belief systems we use in our lives, rather than bickering about truth. Doing so fosters more honest reporting of what science does on a daily basis and removes the blind faith that some folks place in the sciences’ methods. Bad reporting of scientific methods gives society a false sense of accomplishment regarding the predictive power of these methods. “The scientific method is the most rigorous path to knowledge, but it’s also messy and tough” (1).

Much of our experience of the world is shaped by ignorance. Whether it is a driver yelling at a cyclist because one of them doesn’t know the traffic laws, or an 18th century doctor bleeding a patient because he doesn’t understand the causes of illness, ignorance frequently causes harm — even if the ignorant are trying to help.

I suppose, in part, this is why we have the phrase, “Ignorance is bliss.” The ignorant don’t know whether they are hurting or helping, and in their view, they are doing the right thing. How often is this the case? How often do we do the wrong thing when thinking that we’re acting on our correct knowledge?

I’m afraid this happens more often than not, but the silver lining is that most situations don’t have extreme consequences for our ignorant actions. A deli worker who misreads your order and makes you a turkey sandwich instead of a ham sandwich isn’t causing great problems for anyone, and this is the kind of scenario that fills most of our lives. Rarely are we in an operating room where we have to make an uncertain decision about how to save a patient’s life. We’ve built long, arduous training programs in an attempt to put the best-trained people in those situations that can have dire consequences if we act ignorantly. These training programs don’t always work, but they help ameliorate some of the damage we can cause due to ignorance.

An extreme reaction to our own ignorance is a type of paralysis. We become afraid to do anything because, if we really dig into it, we aren’t certain about very many things. We don’t help people because we’re uncertain about whether they want help; we don’t communicate with others because we’re uncertain of the outcome. However, this conclusion is as faulty as the assumption that we’re better off remaining ignorant and simply assuming that we’re acting from knowledge.

It seems that the best effort we can make is to try to act on our best knowledge of any situation, while recognizing that we’ll probably make a bunch of mistakes along the way — until we invent a crystal ball, that is.

How do you know?

I’m reading Daniel Kahneman’s book, _Thinking, Fast and Slow_. If you’ve read other books on behavioral economics and decision making — such as _Fooled by Randomness_, _Predictably Irrational_, or _Antifragile_ — this book will be an interesting expansion of the ideas presented in those books. But let me tell you, chapter 21 is where this book is at.

In chapter 21, Kahneman gives some great applications for decision-making heuristics, or approximate, algorithmic tools that help a person make a decision under most circumstances. He explains how one researcher, Orley Ashenfelter, developed an algorithm to judge whether a particular vintage in Bordeaux, France will be valuable to collectors using only three variables: the amount of rainfall the preceding winter, the temperature and rainfall during the summer growing season, and the chateau producing the wine. Kahneman claims that this algorithm explains 90% of the variation in the value of a particular vintage of Bordeaux, and Ashenfelter says the weather explains 80% of Bordeaux’s quality (as measured by price at auction) and the chateau explains 20%. Kahneman goes on to explain how simple algorithms often do a better job of predicting complicated situations than complex statistical models or human experts do: broad stock market returns, price performance of individual stocks, the success of a proposed scientific research program, political situations, hiring a new employee. I’m thrilled to know that there are tools we can use to make better decisions in areas that typically baffle people. I find it odd that most people ignore these tools and continue making unnecessary errors.
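Kahneman doesn’t reproduce Ashenfelter’s equation, and the coefficients below are invented placeholders rather than the published ones, but the shape of such a heuristic is nothing more than a weighted sum of a few observable variables. A minimal sketch:

```python
def predicted_vintage_quality(winter_rain_mm, summer_temp_c, summer_rain_mm):
    """Toy linear score in the spirit of Ashenfelter's Bordeaux model.

    The weights here are illustrative inventions; the real model was fit by
    regression on auction prices. A chateau-specific premium would be added
    on top of this weather-only score.
    """
    return (
        0.002 * winter_rain_mm    # wetter preceding winters help
        + 0.60 * summer_temp_c    # hotter growing seasons help
        - 0.004 * summer_rain_mm  # summer/harvest rain hurts
    )

# Compare two hypothetical years: a hot, dry summer vs. a cool, wet one.
good_year = predicted_vintage_quality(600, 19.5, 250)
poor_year = predicted_vintage_quality(400, 16.0, 450)
print(good_year > poor_year)  # the hot, dry year scores higher
```

The design point Kahneman stresses is that even crude weights in a formula like this often beat expert intuition, because the formula applies the same weights every time.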

Kahneman does note that people can predict some areas of human experience, but these areas are predictable and controlled: things like house fires, chess games, and other situations that change in well-documented ways can be understood and predicted by human experts. Taleb, in _Antifragile_, explains the difference between the predictable and unpredictable situations that people encounter using a metaphor of quadrants.

Taleb’s quadrant diagram shows that situations with complex pay-offs and unknown statistical distributions, such as stock market price performance and political events, are unpredictable, and changes in outcome can be drastic. Chess games and house fires, by contrast, are more predictable because their behavior is less volatile: their changes are less extreme, and we understand those events better.

It is particularly pertinent to philosophy that statistics play a key role in understanding how people know about the world, yet most theories of knowledge (i.e., epistemological theories) ignore the importance of statistics in our knowledge. For example, it is rare for anyone to know something with 100% certainty: even the force of gravity fluctuates in strength over the Earth’s surface, although most high school graduates will tell you without hesitation that the rate of gravitational acceleration on Earth is 9.8 m/s². However, that standard value is good enough for nearly all people living on Earth. Most of us will never need to know that gravity is weaker on top of Mount Everest, in equatorial Kuala Lumpur, and in high-altitude Mexico City, or stronger in Oslo, Norway. Still, the fact is that we often don’t know what we think we know: in other words, we are often less than 100% certain of many facts that we would say we know for certain. However, as Taleb’s diagram shows, this uncertainty is trivial in most “quadrants” of our lives. The “fourth quadrant” is the domain where that uncertainty can come back to bite us.
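The latitude dependence mentioned above can be approximated with the 1980 International Gravity Formula. A short sketch (sea-level only; altitude effects, which also matter for Everest and Mexico City, are ignored here):

```python
import math

def gravity_at_latitude(lat_deg):
    """Sea-level gravity (m/s^2) via the 1980 International Gravity Formula.

    Ignores altitude and local density anomalies, so it understates how
    weak gravity is atop Mount Everest or in high-altitude Mexico City.
    """
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(2 * math.radians(lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)

print(f"Kuala Lumpur (~3°N): {gravity_at_latitude(3.1):.4f} m/s^2")
print(f"Oslo (~60°N):        {gravity_at_latitude(59.9):.4f} m/s^2")
```

The familiar 9.8 (or the standard 9.80665) is a rounded convention sitting between the equatorial and polar values, which is exactly the “good enough for nearly all people” point.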

The implications of this over-confidence in our knowledge are important. It’s well-documented that most finance experts aren’t as good at picking stocks as they say they are, and that most political pundits don’t have a clue about where the next political crisis will erupt. Kahneman covers this in his book, and you can find other authors documenting the same findings. Still, we need to get a handle on how much to trust what someone is telling us. How do we do this? How do we know what we know?

Philosophers talk about knowledge in terms of “justified true belief”. On this definition, a belief counts as knowledge only if it is both valid and justified. Validity is a logical value: it provides the rational support for holding a belief. Justification explains why we ought to hold a belief by showing how the belief applies to the empirical world. In other words, validity is an abstract value of knowledge, and justification ties that abstract value to some support in the empirical world. It seems to me that demonstrating the validity of a belief is relatively simple compared to demonstrating its justification. Moreover, validity can be a trivial value: it’s possible to show that many claims about things that don’t exist are valid. For example, here is a valid but empirically false, useless, and meaningless syllogism: “All unicorns poop rainbows. I am a unicorn. Therefore, I poop rainbows.” Proving that a belief is valid is useless if that belief has no application to the empirical world. Consequently, most debates circle around justifications for a particular belief rather than its validity.
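Validity really is mechanically checkable, which is the sense in which it is “relatively simple.” Treating the unicorn syllogism in its propositional form — “if A then B; A; therefore B” — a brute-force truth-table check (a toy sketch, not a full predicate-logic prover) confirms that no assignment makes the premises true and the conclusion false:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid iff no truth assignment makes every
    premise true while the conclusion is false."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# "All unicorns poop rainbows; I am a unicorn; therefore I poop rainbows"
# flattened to: (a -> b), a  |=  b
premises = [lambda a, b: (not a) or b,  # if unicorn, then rainbow-pooper
            lambda a, b: a]             # I am a unicorn
conclusion = lambda a, b: b             # I poop rainbows

print(is_valid(premises, conclusion, n_vars=2))  # valid despite false premises
```

The check says nothing about whether unicorns exist; it only inspects the form. Justification — tying the premises to the empirical world — is exactly what this mechanical procedure cannot do.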

Some might say that the philosophical (or possibly religious) concept of Truth applies to justification, because a true and valid argument must apply to the world we inhabit. However, truth is a difficult concept to apply to justification because so much of our previous knowledge has been replaced with more accurate versions, as we saw in the gravity example. Consequently, it seems cleaner and easier to talk about justification in terms of testing whether a belief applies to the empirical world. The methods of such testing are beyond the scope of this post, but I may cover them in another one.

Statistics come into play in justifying one’s knowledge. Sometimes those statistics are trivial: how likely is it that you’ll need to eat breakfast tomorrow morning? Other times, they are more critical: how likely is it that you’ll have enough money saved and activities planned to make life worth living if you retire tomorrow morning? Unlike working through Frege’s logical calculus or parsing syllogisms, showing that a belief is justified is difficult. It requires demonstrating that the belief is well supported by empirical observations, and that demonstration will rarely be a deduction. More likely, it will be an inference. Political platforms, investment ideas, and religious ideologies live in this space, and much energy has been spent attempting to justify these kinds of beliefs.
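One standard way to make “justified by empirical observation” concrete — which this post doesn’t name, but which fits the inference-not-deduction point — is Bayesian updating: evidence shifts a degree of belief rather than proving it outright. A minimal sketch, with all probabilities invented for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a belief after one piece of evidence (Bayes' rule)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Belief: "my retirement savings will last 30 years."
# Evidence: a planner's projection says yes -- but such projections are
# imperfect, so the evidence only shifts the probability, never settles it.
posterior = bayes_update(prior=0.5,
                         p_evidence_if_true=0.8,
                         p_evidence_if_false=0.3)
print(f"belief after evidence: {posterior:.2f}")
```

Note that the posterior never reaches 1: justification accumulates rather than terminating in certainty, which matches the earlier quote about every result being a “temporary truth.”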

The point of all this prattle is that it is useful to consider the situation in which we find ourselves and to ask whether we’re thinking about it in the correct way. Is this a situation where being approximately correct is good enough, or will being wrong carry dire consequences? It’s also useful to know how you know what you believe: can the belief be deduced, as in math and logic, or does it require further justification, as in engineering, where math and logic are applied to the empirical world, or in the “messier” beliefs of the humanities?