The website fivethirtyeight.com recently published an interesting article that helps explain the problem of truth: http://fivethirtyeight.com/features/science-isnt-broken/. Statistical analysis makes proving an empirical hypothesis difficult. The problem is that truth is a binary value: something is either true or false, and we can’t deal in half-truths. Yet half-truths compose most of our world, and often they are the best we can do in answering difficult questions. This is why we rely on statistics to describe so much of our experience: it is a type of analysis suited to questions that don’t fit a binary, true-or-false, yes-or-no answer.
Statistics often provide scientists with a result, and that result is often publishable, but a publishable result is not always the same thing as an answer to the question (1). A single data set may yield many publishable results that offer conflicting or even contradictory answers to a scientist’s question (1). N. N. Taleb makes a similar point in his analysis of decision making under complexity: we have considerable difficulty predicting the correct answer, and often we cannot know it at all, because of limits on our ability to predict the outcome of a given event. The statistical distributions of some events are so unpredictable that extensive observation is required before any meaningful or useful conclusions can be drawn.
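The point about one data set yielding conflicting results can be made concrete with a small simulation. The sketch below is illustrative and not drawn from the cited article: it builds a data set in which the predictor has, by construction, no real relationship to the outcome, then measures the correlation under several defensible analysis choices (use all the data, drop outliers, split the sample). The subset rules and sample size are arbitrary assumptions chosen for the demonstration.

```python
import random
import statistics

# Illustrative sketch: one data set, several defensible analysis
# choices, different answers. The predictor x is independent of the
# outcome y by construction, so any pattern found is noise.
random.seed(1)

n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]  # independent of x

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return cov / (sx * sy)

# Each rule below is a choice a researcher could defend in print.
subsets = {
    "all data":      list(range(n)),
    "drop outliers": [i for i in range(n) if abs(x[i]) < 2],
    "first half":    list(range(n // 2)),
    "second half":   list(range(n // 2, n)),
}

results = {}
for name, idx in subsets.items():
    results[name] = correlation([x[i] for i in idx], [y[i] for i in idx])
    print(f"{name:15s} r = {results[name]:+.3f}")
```

The same raw numbers produce different correlations depending on which of these reasonable-sounding choices is made, which is one mechanism behind conflicting publishable results.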
Moreover, subjectivity influences a researcher’s results, which makes Truth a difficult commodity to produce. Daniel Kahneman’s book, _Thinking, Fast and Slow_, helps explain how the human mind interferes with our supposedly pure perception of the world, making our determination of truth difficult, if not impossible. A better measure of scientific knowledge might be utility: what can we do with this information? Unlike truth, utility is not a binary value we assign to knowledge. And like truth or validity, utility can be empirically tested.
In the history of science, most results turn out to be false, at least in part if not completely. John Ioannidis supports this idea, stating that most published findings are false (1). “The important lesson here is that a single analysis is not sufficient to find a definitive answer. Every result is a temporary truth, one that’s subject to change when someone else comes along to build, test and analyze anew” (1).
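Ioannidis’s claim rests on a back-of-envelope calculation that is easy to reproduce. The sketch below uses illustrative numbers, not his (his full model also accounts for bias and multiple testing teams): if only a small fraction of the hypotheses a field tests are actually true, then even with a standard significance threshold and decent statistical power, most “positive” findings are false positives.

```python
# Back-of-envelope sketch of the "most findings are false" arithmetic.
# Illustrative parameter values; Ioannidis's full model is richer.
def positive_predictive_value(prior, alpha=0.05, power=0.8):
    """Fraction of significant findings that reflect a true effect.

    prior: share of tested hypotheses that are actually true
    alpha: false-positive rate of the test
    power: chance a true effect is detected
    """
    true_pos = prior * power           # true effects correctly detected
    false_pos = (1 - prior) * alpha    # null effects declared significant
    return true_pos / (true_pos + false_pos)

# In a field probing long-shot hypotheses (1 in 100 true),
# most published positives are false:
print(positive_predictive_value(0.01))  # ≈ 0.14
# With more plausible hypotheses (1 in 10 true), the picture improves:
print(positive_predictive_value(0.10))  # ≈ 0.64
```

The calculation shows why a single significant result is weak evidence on its own, which is exactly the article’s point about temporary truths.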
We ought to focus our efforts on developing the utility of the belief systems we use in our lives, rather than bickering about truth. Doing so fosters more honest reporting of what science does on a daily basis and removes the blind faith some people place in science’s methods. Bad reporting of the scientific method gives society a false sense of the accomplishments and predictive power of those methods. “The scientific method is the most rigorous path to knowledge, but it’s also messy and tough” (1).