Thoughts on Happiness

Dan Gilbert has written a book and given a TED Talk about happiness. His starting point is that humans are the only animals who can imagine future scenarios and develop preferences about which future we want to experience. This is a powerful skill. However, Gilbert goes on to claim that we’re famously bad at choosing futures that will make us happy, even when we’re told that the choice we’re likely to make will make us less happy.

Gilbert reviews a bevy of studies showing that we’re bad at choosing future experiences that make us happy. For example, we are generally less happy in situations that we know we could have chosen differently, yet we typically prefer to be in situations that offer the option to make a different choice — even when we know that this will likely make us less happy with our decision. But wait, there’s more: we’re also bad at imagining how happy we’ll be in different scenarios. If we imagine whether we’d be happier in the long run after winning the lottery or becoming a paraplegic, those of us who aren’t already paraplegic will almost certainly choose the future in which we win the lottery, but research shows that lottery winners and paraplegics are equally happy several months after their respective life-changing events. Gilbert says we are equally happy after an unexpected tragedy and an unexpected windfall because we have a psychological immune system that works to maintain our long-term happiness in the face of extreme events. So, we prefer situations we know will make us less satisfied with our choices, and we are bad at guessing how happy we’ll be in the face of extreme events. What are we supposed to do now?

Gilbert plays the role of a consummate scientist in his writing and speaking. He doesn’t cop to being a self-help guru, and he goes even further by staying mum about practical applications of his research. He’s simply reporting what scientists have found in their research. How frustrating. Science is famously descriptive. Scientists report their findings, and if they’re feeling generous, they point towards areas of further research. But that is the role of science: to tell us how things work and how events occur as objectively as possible. The problem with this method is that it is easy for people to mistake description for prescription. The misuse of Vilfredo Pareto’s discoveries about wealth distribution is a great example. Pareto observed several situations in which twenty percent of the population owned eighty percent of the assets, and he demonstrated some interesting mathematical permutations on this observation. Subsequently, some writers have taken this observation about statistical distributions as a heuristic for leading one’s life. This is fallacious thinking at its finest: simply because we observe an interesting pattern over here doesn’t mean that the same pattern applies or occurs over there too. Gilbert is trying to avoid this kind of self-help scientism by remaining silent on what to do with this information, but simply being told by a scientist that we suck at making decisions about happiness isn’t terribly useful. We still want some help making use of his data.
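
For the curious, Pareto’s observation corresponds to a heavy-tailed statistical distribution that now bears his name. Here’s a minimal sketch in Python (assuming NumPy) that samples from it; the shape parameter 1.16 is the commonly cited value that yields roughly an 80/20 split, used here purely for illustration:

```python
import numpy as np

# Sample "wealth" for 100,000 people from a Pareto distribution.
# NumPy's pareto() draws from the Lomax form, so we add 1 to get the
# classical Pareto distribution with minimum wealth 1.
rng = np.random.default_rng(42)
wealth = np.sort(rng.pareto(1.16, size=100_000) + 1)

# What fraction of total wealth does the richest 20% hold?
top_20_share = wealth[int(0.8 * len(wealth)):].sum() / wealth.sum()
print(f"Share held by the top 20%: {top_20_share:.0%}")  # roughly 80%
```

Note what the code does and doesn’t say: it describes how such a distribution behaves, and nothing in it tells you how to live.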

Fortunately, we have a way around this epistemic void without resorting to new-agey views about science and mathematics:

First, Gilbert explains that we’re bad at making choices about extreme events. This shouldn’t be surprising: extreme events don’t happen very often, so we have little past evidence with which to interpret them. This reasoning is tautological, but a good empiricist ought to know when we don’t have enough information to work with, and when we don’t, our choices are to change our method or get more data. In our case, changing our methods will get us to a better place with less work. Let me explain a little more. Gilbert admits we are better at choosing between smaller, shorter-term options about our happiness. Moreover, Gilbert says that our happiness is more affected by frequent, less extreme events than it is by rarer, more extreme events. This combination of greater effect and better forecasting for more common events implies that we should focus on the small stuff to have a larger impact on our happiness.

The idea that we can better address small problems resonates with ideas N. N. Taleb elaborates in his book, Antifragile. We ought to look at decisions about happiness as a series of small decisions that compound to a larger result. Because we have more and better experience with small, frequent problems than with extreme, rare ones, decomposing big problems into smaller problems will get us to better, more durable solutions. For example, rather than focusing on losing twenty-five pounds of fat in a year, it’s simpler and more concrete to focus on changing one’s daily diet and exercising for thirty minutes three times per week. Without even stating a goal of losing twenty-five pounds, you’d likely realize it simply by improving your diet and increasing your exercise. And even if you didn’t meet the goal, you’d be a healthier, more toned or muscular person regardless of your body weight, no yo-yo dieting required. So, the first step towards happiness is focusing on small, manageable problems.
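
A quick back-of-envelope calculation shows how the small approach adds up to the big goal. The 3,500 kcal-per-pound rule of thumb and the 250 kcal/day figure below are my illustrative assumptions, not numbers from Gilbert or Taleb:

```python
# How a small, repeatable daily change compounds into a big annual result.
KCAL_PER_POUND_OF_FAT = 3500   # common rule of thumb
daily_deficit_kcal = 250       # a smaller portion here, a short walk there

pounds_per_year = daily_deficit_kcal * 365 / KCAL_PER_POUND_OF_FAT
print(f"Estimated fat loss over a year: {pounds_per_year:.0f} pounds")
# -> about 26 pounds, without ever planning around the big number
```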

Second, we can further address the problem of how to be happy by studying psychological experiments about the amount of pain caused by losing versus the pleasure of winning. Daniel Kahneman and Amos Tversky conducted these experiments, and Dan Ariely popularized them in his book Predictably Irrational. In essence, we dislike losing about twice as much as we like winning, which gives us a hint at a practical approach towards happiness: focus on losing less, rather than winning more. In other words, lose less, and lose less often, and you’ll probably be less unhappy, which is an acceptable start towards being happy, as far as I’m concerned.
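
To make the asymmetry concrete, here’s a small sketch of the value function from Kahneman and Tversky’s prospect theory, using the parameters they estimated in their 1992 paper (a standard textbook parameterization; nothing here comes from Ariely’s book):

```python
# Prospect theory's value function: gains are discounted (alpha < 1) and
# losses are amplified by the loss-aversion coefficient lambda ~ 2.25.
def subjective_value(x, alpha=0.88, lam=2.25):
    """Felt value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

print(subjective_value(100))    # winning $100 feels like ~ +57.5
print(subjective_value(-100))   # losing $100 feels like  ~ -129.5
# The loss looms roughly 2.25 times larger than the equivalent gain.
```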

Combining these two observations gives us the following maxim:
To be happy, choose to put yourself in situations that make you less unhappy
by focusing on small, manageable problems.
This advice may seem trite and obvious, but give it a second look. Browse a social media or news site for fifteen minutes, and you will likely concede that we generally choose to give our f—s to large, difficult-to-manage problems that we hope will make us happy: worrying about the future president, for example. This is the opposite of my recommendation. So why not give it a shot? Try being less unhappy about the little stuff you can affect, and see what comes about.

We don’t study what we’re good at

(Dear Grammar Nazis, never mind the preposition hanging off the end of that title. We’re going for colloquial English today.)

I’ve spent most of my life attending and working in educational institutions, which has given me lots of time to observe academic professionals. While many of them are experts in their fields, I’ve found something ironic: academics don’t study what they are good at, or perhaps, academics aren’t good at what they study. In other words, the subjects that interest people aren’t those that they feel they have mastered. This is precisely why people study those subjects: they want to improve at something they don’t fully comprehend yet; they want to know more about something. Of course, people tend to improve, and even become accomplished, at something they spend a lot of time practicing, but that doesn’t necessarily mean they’re “good” at it.

A couple of examples might help illustrate my point. First, there are many professors of communication studies where I work. One would assume that these professors are good at communicating. After all, they have been studying communication long enough to earn a Ph.D. Yet many of these people are famous for terse, cryptic replies to emails, or no reply at all after they ask for assistance. In short, there are experts in communication studies who have communication problems. There are also misanthropic anthropologists, unreasonable philosophers, and racist ethnic studies professors, and I’ve worked with a Ph.D. computer scientist who couldn’t accurately diagnose a networking problem.

Second, in the movie Good Will Hunting, Robin Williams has a great monologue about Freud doing enough cocaine to kill a small horse, hinting that at least one psychiatrist had some addictive tendencies as well as some pretty colorful theories about the human mind. We’ll assume that Williams’ monologue is scientifically accurate for our purposes here, or at least anecdotally useful. Freud’s colleague, Jung, also had some unusual ideas about human consciousness, as well as several extramarital affairs and possibly a mental disorder. Certainly, Freud and Jung contributed much to the fields of psychology and psychiatry, but if their personal histories are any indication, they may not have been the best examples of mental health.

What is going on here? Why are experts apparently inept at practicing what they preach, so to speak? The answer, I submit, is that we don’t study what we’re good at. Rather, we study what interests us, and along the way, we might gain some proficiency in our chosen course of study. However, there is a difference between knowing something well and doing it well. For example, having a deep understanding of music theory doesn’t let me immediately pick up a saxophone, trumpet, or guitar and play like John Coltrane, Miles Davis, or Jimi Hendrix — even being able to write music for those instruments doesn’t ensure my ability to play them. Conversely, some musicians don’t understand music theory, yet they play their instruments better than folks who know both the instrument and the theory. In other words, performance and knowledge aren’t the same thing: knowing how to play the piano, i.e., pushing appropriate white and black keys in rhythm to create music, is different from being able to apply that knowledge. A similar situation is going on with the communications professors and psychiatrists. These people know a great many things in the fields they study, but performing that knowledge is a different task entirely.

This is where Aristotle’s concept of wisdom might be useful. Aristotle distinguishes two types of wisdom, theoretical and practical. To paraphrase Aristotle’s point, theoretical wisdom is knowing facts about the world, and practical wisdom is knowing how to live well. We might say these two categories of wisdom are ‘knowing’ and ‘doing’. The professors and psychiatrists from earlier have theoretical wisdom and little practical wisdom: they know a great many things, but they don’t seem to apply that knowledge very well. Practical wisdom is knowing what to do at the right moment: talented musicians who play their instruments very well without knowing the theory behind their performance possess practical wisdom without much theoretical wisdom.

This two-headed wisdom monster presents a problem: how do we attain both practical and theoretical wisdom? Philosophers have been bickering for millennia about wisdom, so can we even trust that they know what they’re talking about? This isn’t a question I’ll pretend I can answer, especially in a single blog post, but it’s interesting food for thought. It’s something to strive for.

The Problem of Truth

The website fivethirtyeight.com recently published an interesting article that helps explain the problem of truth: http://fivethirtyeight.com/features/science-isnt-broken/. Statistical analysis makes proving an empirical hypothesis difficult. The problem here is that truth is a binary value: something is either true or false. We can’t deal in half-truths. Yet half-truths compose most of our world. Often, they’re the best we can do in answering difficult questions. This is why we rely on statistics to describe so much of our experience: statistics is a type of analysis that deals with questions that don’t fit well into a binary, true-or-false, yes-or-no type of answer.

Statistics often provide scientists a result, which is frequently publishable, but that’s not always the same thing as an answer to the question (1). A data set may provide many publishable results that offer conflicting or even contradictory answers to a scientist’s question (1). N. N. Taleb makes a similar point in his analysis of decision making under complexity: we have considerable difficulty predicting the correct answer, and often we can’t know it at all, due to limitations in our ability to predict the results of a given event. Statistical distributions for some events are so unpredictable that extensive observations are required before any meaningful or useful conclusions can be drawn.
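
The FiveThirtyEight article makes this point with an interactive p-hacking demo. Here’s a minimal sketch of the same idea in Python (assuming NumPy and SciPy): analyze pure noise enough different ways, and some results will look publishable:

```python
import numpy as np
from scipy import stats

# Two groups with NO real difference, tested on 20 different outcome
# measures. At the conventional p < 0.05 threshold, we expect roughly
# one "significant" finding per 20 comparisons by chance alone.
rng = np.random.default_rng(0)
significant = 0
for outcome in range(20):
    group_a = rng.normal(size=30)          # 30 subjects, no true effect
    group_b = rng.normal(size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        significant += 1

print(f"'Significant' findings from random data: {significant} of 20")
```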

Moreover, subjectivity influences a researcher’s results, which makes Truth a difficult commodity to produce. Daniel Kahneman’s book, Thinking, Fast and Slow, helps explain how the human mind interferes with our supposedly pure perception of the world, which makes our determination of truth difficult, if not impossible. A better measure of scientific knowledge might be utility: what can we do with this information? Unlike truth, utility is not a binary value that we assign to knowledge. Moreover, utility can be empirically tested in a way that truth or validity cannot.

In the history of science, most results turn out to be false, at least in part if not completely. John Ioannidis supports this idea, stating that most published findings are false (1). “The important lesson here is that a single analysis is not sufficient to find a definitive answer. Every result is a temporary truth, one that’s subject to change when someone else comes along to build, test and analyze anew” (1).

We ought to focus our efforts on developing the utility of the belief systems that we use in our lives, rather than bickering about truth. This fosters more honest reporting of what science does on a daily basis, removing the faith that some folks place in the sciences’ methods. Bad reporting of scientific methods gives society a false sense of accomplishment regarding the predictive power of those methods. “The scientific method is the most rigorous path to knowledge, but it’s also messy and tough” (1).

1. http://fivethirtyeight.com/features/science-isnt-broken/

Ignorance

Much of our experience of the world is shaped by ignorance. Whether it is a driver yelling at a cyclist because one of them doesn’t know the traffic laws, or an 18th century doctor bleeding a patient because he doesn’t understand the causes of illness, ignorance frequently causes harm — even if the ignorant are trying to help.

I suppose, in part, this is why we have the phrase, “Ignorance is bliss.” The ignorant don’t know whether they are hurting or helping, and in their view, they are doing the right thing. How often is this the case? How often do we do the wrong thing when thinking that we’re acting on our correct knowledge?

I’m afraid this happens more often than not, but the silver lining is that most situations don’t have extreme consequences for our ignorant actions. A deli worker who misreads your order and makes you a turkey sandwich instead of a ham sandwich isn’t causing great problems for anyone, and this is the kind of scenario that fills most of our lives. Rarely are we in an operating room where we have to make an uncertain decision about how to save a patient’s life. We’ve built long, arduous training programs in an attempt to put the best-trained people in those situations that can have dire consequences if we act ignorantly. These training programs don’t always work, but they help ameliorate some of the damage we can cause due to ignorance.

An extreme reaction to our own ignorance is a type of paralysis. We become afraid to do anything because, if we really dig into it, we aren’t certain about very many things. We don’t help people because we’re uncertain about whether they want help; we don’t communicate with others because we’re uncertain of the outcome. However, this conclusion is as faulty as the assumption that we’re better off remaining ignorant and simply assuming that we’re acting from knowledge.

It seems that the best effort we can make is to try to act on our best knowledge of any situation, while recognizing that we’ll probably make a bunch of mistakes along the way — until we invent a crystal ball, that is.

How do you know?

I’m reading Daniel Kahneman’s book, Thinking, Fast and Slow. If you’ve read other books on behavioral economics and decision making — such as Fooled by Randomness, Predictably Irrational, or Antifragile — this book will be an interesting expansion of the ideas presented in those books. But let me tell you, chapter 21 is where this book is at.

In chapter 21, Kahneman gives some great applications of decision-making heuristics, or approximate, algorithmic tools that help a person make a decision under most circumstances. He explains how one researcher, Orley Ashenfelter, developed an algorithm to judge whether a particular vintage of Bordeaux will be valuable to collectors using only three variables: the amount of rainfall the preceding winter, the temperature and rainfall in the summer growing season, and the chateau producing the wine. Kahneman claims that this algorithm explains 90% of the value of a particular vintage of Bordeaux, and Ashenfelter says the weather explains 80% of Bordeaux’s quality (as measured by price at auction) and the chateau explains 20%. Kahneman goes on to explain how simple algorithms often do a better job of predicting complicated situations than complex statistical models or human experts do: broad stock market returns, price performance of individual stocks, the success of a proposed scientific research program, political situations, hiring a new employee. I’m thrilled to know that there are tools we can use to make better decisions in areas that typically baffle people. I find it odd that most people ignore these tools and continue making unnecessary errors.
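
For a sense of what such a model looks like, here’s a minimal sketch in Python. The structure (a plain linear formula over a few weather variables) matches the kind of model Kahneman describes, but the coefficients are made up for illustration; they are not Ashenfelter’s published estimates, and a real model would also add a per-chateau adjustment:

```python
# Crude linear predictor of vintage quality (arbitrary units).
# Illustrative coefficients only; NOT Ashenfelter's published equation.
def vintage_score(winter_rain_mm, summer_temp_c, summer_rain_mm):
    return (-12.0
            + 0.001 * winter_rain_mm   # wet winters help
            + 0.60  * summer_temp_c    # warm growing seasons help
            - 0.004 * summer_rain_mm)  # wet growing seasons hurt

# A warm, dry growing season vs. a cool, wet one:
print(vintage_score(600, 17.5, 100))   # higher score
print(vintage_score(600, 15.0, 300))   # lower score
```

The point isn’t the particular numbers; it’s that a formula this simple, applied consistently, can outperform experts who adjust their judgments case by case.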

Kahneman does note that people can predict some areas of human experience, but those areas are predictable and controlled: things like house fires, chess games, and other situations that change in well-documented ways can be understood and predicted by human experts. Taleb, in Antifragile, explains the difference between the predictable and unpredictable situations that people encounter using a metaphor of quadrants.

Taleb’s quadrant map shows that situations with complex pay-offs and unknown statistical distributions, such as stock market price performance and political events, are unpredictable, and changes in outcome can be drastic. However, chess games and house fires are more predictable because their behavior is less volatile: their changes are less extreme because we can better understand those events.

It is particularly pertinent to philosophy that statistics play a key role in how people know about the world, yet most theories of knowledge (i.e., epistemological theories) ignore the importance of statistics in our knowledge. For example, it is rare for anyone to know something with 100% certainty: even the force of gravity varies in strength across the Earth’s surface, although most high school graduates will tell you without hesitation that the rate of gravitational acceleration on Earth is 9.8 m/s². However, that standard value is good enough for nearly all people living on Earth. Most of us will never need to know that the force of gravity is weaker on top of Mount Everest, in Kuala Lumpur, and in Mexico City, or stronger in Oslo, Norway. Still, the fact is that we often don’t know what we think we know: in other words, we are often less than 100% certain of many facts that we would say we know for certain. However, as Taleb’s diagram shows, this uncertainty is trivial in most “quadrants” of our lives. The “fourth quadrant” is the domain where that uncertainty can come back to bite us.
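
To see how much g actually varies, here’s a quick sketch using the 1980 international gravity formula, which gives sea-level gravity as a function of latitude (altitude corrections, which make Everest and Mexico City weaker still, are a separate term ignored here):

```python
import math

def gravity(latitude_deg):
    """Sea-level gravitational acceleration (m/s^2) at a given latitude,
    per the 1980 International Gravity Formula."""
    s = math.sin(math.radians(latitude_deg))
    s2 = math.sin(2 * math.radians(latitude_deg))
    return 9.780327 * (1 + 0.0053024 * s**2 - 0.0000058 * s2**2)

print(f"Equator: {gravity(0.0):.4f} m/s^2")   # ~9.7803
print(f"Oslo:    {gravity(59.9):.4f} m/s^2")  # ~9.8191
print(f"Pole:    {gravity(90.0):.4f} m/s^2")  # ~9.8322
```

The whole spread is about half a percent, which is exactly why 9.8 m/s² serves nearly everyone well enough.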

The implications of this over-confidence in our knowledge are important. It’s well-documented that most finance experts aren’t as good at picking stocks as they say they are, and that most political pundits don’t have a clue about where the next political crisis will erupt. Kahneman covers this in his book, and you can find other authors documenting the same information. However, we need to get a handle on how much to trust what someone is telling us. How do we do this? How do we know what we know?

Philosophers talk about knowledge in terms of “justified true belief”. This definition requires that a belief be both justified and valid. Truth is a logical value, which provides rational support for holding a belief. Justification helps explain why we ought to hold a belief by showing how the belief applies to the empirical world. In other words, truth is an abstract value of knowledge, and justification ties that abstract value to some support in the empirical world. It seems to me that demonstrating the validity of a belief is relatively simple compared to its justification. Moreover, validity can be a trivial value: it’s possible to show that many claims about things that don’t exist are valid. For example, this is a valid, but empirically false, useless, and meaningless syllogism: “All unicorns poop rainbows. I am a unicorn. Therefore, I poop rainbows”. Proving that a belief is valid is useless if that belief doesn’t have some application to the empirical world. Consequently, most debates circle around justifications for a particular belief rather than its validity.
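
To show how cheap validity is, here’s a small sketch in Python that brute-forces the truth table for the unicorn argument. An argument is valid when the conclusion holds in every row where all the premises hold, and that test says nothing about whether the premises describe the real world:

```python
from itertools import product

def implies(a, b):
    """Material implication: 'if a then b'."""
    return (not a) or b

# P = "I am a unicorn", Q = "I poop rainbows".
# Premises: P -> Q, and P. Conclusion: Q.
valid = all(
    q                                   # conclusion must hold...
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p              # ...in every row where premises hold
)
print(f"Valid: {valid}")  # True, despite the premises being empirically false
```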

Some might say that the philosophical (or possibly religious) concept of Truth applies to justification, because a true and valid argument must apply to the world we inhabit. However, truth is a difficult concept to apply to justification because so much of our previous knowledge has been replaced with more accurate versions, as we found in our gravity example. Consequently, it seems cleaner and easier to talk about justification in terms of testing whether a belief applies to the empirical world. The methods of testing are beyond the scope of this post, but I may cover them in another post.

Statistics come into play in justifying one’s knowledge. Sometimes those statistics are trivial: how likely is it you’ll need to eat breakfast tomorrow morning? And other times, those statistics are more critical: how likely is it you’ll have enough money saved and activities planned to make life worth living if you retire tomorrow morning? Unlike working through Frege’s logical calculus or parsing syllogisms, showing that a belief is justified is difficult. It requires a demonstration that the belief is well supported by empirical observations, but this will rarely be a deduction. More likely, it will be an inference. Political platforms, investment ideas, and religious ideologies live in this space, and much energy has been spent attempting to justify these kinds of beliefs.

The point of all this prattle is that it is useful to consider the situation in which we find ourselves and ask whether we’re thinking about it in the correct way. Is this a situation where being approximately correct is good enough, or will there be dire consequences if I’m wrong? It’s also useful to know how you know something you believe: can this belief be deduced, as in math and logic, or does it require further justification, as in engineering, where we apply math and logic to the empirical world, or as with the “messier” beliefs of the humanities?

Cryonics, the atheist’s second coming?

Wait But Why (WBW) has written an interesting post about cryonics, which is the practice of preserving a body, or a part of the body, in the hope that future humans will resuscitate that body, so the person associated with that body can continue living a happy, healthy life. I learned a bunch about what this process of “freezing” yourself entails, as well as the motivations that go into actually paying for, and doing, this. But one thing struck me most of all: this sounds remarkably similar to the Christian Rapture.

Cryonics hinges on the faith that people in the future will solve all of their, and our, petty problems before developing a way to bring back to life folks who were beyond medical help in the past. According to WBW’s post, the folks who have paid for this admit there is risk in the whole plan, but they believe that the risk (and the cost associated with trusting a company to maintain their bodies) is worth the payoff of continuing to live once society has solved the problem of mortality.

It seems to me that the clientele for this service must have a pretty strong commitment to scientific materialism, believing that the human body is the foundation of consciousness, and that once the body has degraded sufficiently, life as any human would want to live it is no longer possible. If clients held ontological beliefs that entailed any sort of mind-body dualism, soul theory, or ontological idealism, then paying a large portion of their life savings to preserve their material bodies would be absurd. In other words, the mind must irreversibly cease or decompose with the body for cryonics to make sense.

What’s more, this does something interesting to the traditional Christian view of the Rapture. As you probably well know, Christians believe that dead believers will be reanimated around the time that Jesus Christ returns to earth, and then they will ascend to heaven to live forever. Cryonics moves this trope to the material world, attempting to recreate this process through scientific methods which don’t exist yet, and I find this fascinating. Of course, Christians don’t need to pay for cryonics because they’ve “bought” their resurrection with faith in God, so the purchase doesn’t make financial sense. However, those who don’t believe in the Christian God need to trust someone else to give them eternal life.

But so much for an explanation of cryonics, and an analysis of the beliefs that might justify it. What I find interesting is the need to cling to life to the point that one is willing to pay hundreds of thousands of dollars to gamble on waking up in the future. Something bothers me about this whole endeavor. To oversimplify my feelings, it seems that folks who are willing to pay for cryonics wake up one day to the realization that they are going to die.

Then, after digging around in the couch for lots of spare change, they buy an insurance policy in the name of the corporation that’s going to “care” for their corpse while they wait for the world to reanimate them.

To me, this feels like the metaphysical equivalent of a Ponzi scheme. If you wouldn’t give your life’s savings to a Wall Street firm to double your money, why would you give it to an insurance company or a cryonics company? The pay-off is unknown, and maybe impossible. This seems like gambling on an enormous scale. What am I missing?

Moreover, there’s a problem with the second law of thermodynamics. It takes energy to keep human body parts in liquid nitrogen, and once those people are reanimated, they will also require energy to stay alive. Assuming that future humans aren’t perpetual motion machines, we can’t all live forever. This makes cryonics a niche industry by necessity. The day it becomes popular, we’ll need to solve an over-population problem that’s an order of magnitude larger than the one we currently face. The success of cryonics hinges on its high cost and low adoption rate, which, unless I’m missing something, seems like a selfish way to go through the world.

Is the possibility of death so daunting, or life so amazingly good, that you won’t move over and make space for the next generation that is waiting for the resources you’re consuming? I expect that a cryonics proponent believes that humans will solve the population problem before they decide to reanimate the dead, and I hope they’re correct. But that means you’re going to keep waiting, putting more time and energy-cost between you and living — more time that allows for disaster to strike your cold, hard body.

Applied Philosophy: Aging

I feel like my father is grasping. Now over seventy years old, he seems to feel that the good things in his life are slipping through his fingers. I imagine that this feeling is terrifying.

As if caught in quicksand, he struggles to stop his descent. Grasping at expensive food and wine, as well as regular international travel, he ignores the knee replacement surgery that would allow him to walk, as well as close friends in his hometown. Regularly trading exercise for rich food, he sinks deeper, and I’m not certain that he’s sinking with a smile on his face. There is a desperation in his actions.

I’m still young enough that the sand grains draining from my cupped hands aren’t as noticeable. However, I hope that advanced aging feels less like a continuous theft — of health, mobility, sensation, and time. There are choices we can make early in life that shape our later years. Regular exercise, diet, and social interactions are no guarantee that we’ll live long, happy, and healthy lives. But they help tip the odds in our favor. Given that death is the surest event in life, I’ll happily work to tip the odds.

I’m afraid that my dad will die feeling like he’s been cheated. I’m afraid he’ll be bed-ridden, like his father: his mind taken by Alzheimer’s, his mobility stolen by failed knees, and his health ruined by a rich diet with little exercise. Fear of pain keeps him from replacing his knees. Fear of missing good food and drink keeps him from changing his diet. He feels he can’t do anything about this now, and I’m afraid he’ll make this a truth the longer he waits.

I’m afraid that my Dad is afraid: of aging, pain, death. All of these are valid fears. But fear can lead to grasping at comforting experiences, and I’m not sure that food, wine, and travel can quell fears about aging, pain, and death. Each new rich, luxurious experience stands to remind him of what he’s losing. Unable to catch all the experiences that slip through his fingers, he fears the coming end. This vicious cycle feeds on itself. Eventually, his feeling that he can’t do anything about his situation will clap shut like a trap, and his feeling will become reality. What’s more, his range of options narrows, like prey backed into the trap by the hunting party. The farther he moves into the trap, the more heroically he’ll have to leap to avoid it, and, exhausted by the chase, the less able he’ll be to make that leap.

I hope I’m missing something. That he’s playing a calculated game, in which he lays down his cards on Death’s table, laughing. I know that’s not how my father lives his life, but I hope that’s what he’s doing.