Why ‘behavioral economics’ is neither ‘behavioral,’ nor ‘economics’
(Second of two parts)
Over the years, some economists carried out laboratory experiments and argued that people are inconsistent in the ways they assess risks and probabilities. They concluded that it is misleading to rely on other economists’ view of risk, and that economics cannot be separated from psychology. This field of study is known today as “behavioral economics” (Kahneman and Tversky being its founders), which, as briefly shown here, is neither “behavioral” nor “economics,” nor does it make sense.
Here are a few representative examples, ones that Kahneman and his followers rely on to illustrate people’s misperception of “risk,” their inability to assess probabilities and, by implication, the ease with which their preferences can be manipulated. The examples are all mistaken and, if applied, lead to bad policies.
These academics ask people to think of a coin toss in which, with 50/50 probability, they could either win $15,000 or lose $10,000. Many answer that they would refuse such a bet. People are then told to imagine they have $1 million, the choice now being between $990,000 and $1,015,000. The academics find that more people would now take the bet. Kahneman and company conclude that “when you think in terms of overall wealth, you have a different attitude to risk. Gains and losses loom less large.” But what can such a laboratory experiment reveal about people’s views of risk? Nothing.
First, if you do not tell people how much wealth they own to start with when facing bets, you should not be surprised to get answers all over the map. One person imagines himself having $10,000, another $10 million. For the first, the potential outcomes of the coin toss imply an imaginary chance of becoming homeless and starving, or, if he wins, having access to better food and health care. For the imaginary millionaire, though, the option of losing $10,000 brings to mind entertainment options, and absolutely nothing about risk: “If I lose $10K, I’ll spend one day less renting the yacht, and if I win, I’ll buy a nice present for my” – to be politically correct – “partner”.
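For reference, the bet itself has a positive expected value, and the “wealth framing” is nothing more than an additive shift of the same two outcomes. A quick check, using the dollar figures from the experiment:

```python
# The coin toss described above: 50/50 chance of winning $15,000 or losing $10,000
p = 0.5
win, lose = 15_000, -10_000

expected_value = p * win + p * lose
print(expected_value)  # 2500.0 – positive, yet many refuse the bet

# Reframed around the $1 million baseline: the same bet, merely shifted
wealth = 1_000_000
print(wealth + lose, wealth + win)  # 990000 1015000
```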
Elsewhere they argue that emotions heavily influence decisions. No doubt. But their illustrations do not support their views. They say: consumers drive across town to save $5 on a $15 calculator, but not to save $5 on a $125 coat – even though, quote, the gain is “precisely the same.” But Kahneman and Tversky take this example from thin air. Both calculators and coats are “durables.” Saving $5 on a $15 item increases the return by 33%, whereas on the coat by a mere 4%. Make other calculations if you wish, but by no stretch of the imagination are these two “precisely the same.”
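The percentage point is simple arithmetic; a quick check, using the dollar figures from the example:

```python
# Relative saving from the same $5 discount on each item
calculator_price, coat_price, discount = 15.0, 125.0, 5.0

calculator_return = discount / calculator_price  # 5/15, about a third
coat_return = discount / coat_price              # 5/125, a mere 4%

print(f"{calculator_return:.0%} vs {coat_return:.0%}")  # 33% vs 4%
```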
Kahneman and company also conducted the following type of survey to suggest that people are bad at assessing probabilities and thus at dealing with risks. They gave Harvard Medical School doctors and staff this hypothetical scenario: a woman asks for an HIV test. The doctor tells her that one in a thousand women of her age and background is infected. The doctor also tells her the test is 95% accurate. The participants were then asked: if the woman tests positive, what are the chances she is infected? Most physicians answered 95%, whereas the correct answer is around 2% (a basic Bayesian calculation).
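The 2% figure can be verified with a short Bayesian calculation. The sketch below assumes, as the scenario implies, a 95% true-positive rate and a 5% false-positive rate; “95% accurate” was left ambiguous in the survey, so those exact rates are an assumption:

```python
# Bayes' rule applied to the HIV-test scenario described above
prevalence = 1 / 1000        # one woman in a thousand is infected
sensitivity = 0.95           # P(positive | infected), assumed from "95% accurate"
false_positive_rate = 0.05   # P(positive | not infected), assumed likewise

# Total probability of a positive test, infected or not
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(infected | positive) by Bayes' rule
p_infected_given_positive = sensitivity * prevalence / p_positive

print(round(p_infected_given_positive, 3))  # 0.019, i.e. about 2%
```

The rarity of the infection is what drives the result: with 999 healthy women for every infected one, false positives swamp true positives.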
Do such surveys prove anything about actual behavior? Will these doctors behave as if the probability were 95%? No, because there are no consequences for giving wrong answers in laboratory experiments and such surveys, whereas in real life they would be sued and lose their licenses. The physicians will ask statisticians – who do not appear as an option in the survey (just as family and Consumer Reports make no appearance in Akerlof’s “lemons” piece, discussed in Part I).
Briefly: in artificial settings people give superficial answers. In life, physicians, aware of their statistical limitations, ask statisticians. As for the laboratory experiments, too numerous to cite, where questions refer to losing or gaining millions of dollars – the contradictions have the same source: why would participants bother to answer seriously? And knowing nothing about the respondents, how can their answers be reliably interpreted?
A British Royal Commission on gambling indeed concluded that gamblers – real ones, not the imaginary, laboratory variety – did not overestimate their chances of winning, an argument often used to rationalize prohibitions. This should not be surprising, since even in the eighteenth and nineteenth centuries information about the probability distributions of winning prizes was widely disseminated: no asymmetric information.
Stephen Stigler, in his 2003 Ryerson lecture titled “Casanova’s Lottery” (yes, that Casanova), found that in the 18th and 19th centuries, too, there was no evidence that people were betting “over their heads” or were ignorant of the probabilities of winning. He found detailed statistics in an 1834 book, Almanach Romain sur la Loterie de France, which summarizes the winning numbers of every draw in France between 1758 and 1833. The winning numbers and the geographic distribution of winners were randomly distributed, implying, as Stigler notes, that there was no fraud, even though public authorities at the time made such claims.
To see why laboratory experiments mislead, consider this: if you own $100,000 and face a 50% chance of losing $10,000, you can expect people in different age groups to give different answers. A young person may expect to recoup the loss, whereas someone between sixty-five and death may not. The latter may buy a lottery ticket, the younger one may not. Attributing the different answers to differences in “preferences” (without having asked about age), preferences these economists anyway believe can be easily manipulated, should not be used to rationalize any policy.
What is also disturbing about these models and laboratory research is that there has long been abundant evidence about what people have actually done for centuries in different societies, and about the private and institutional solutions they invented to deal with risks and uncertainties. Wasting time and money on laboratory experiments, where people could not really lose or gain millions, or take into account the institutions related in real life to the topics of the experiments, made no sense.
While I fully agree that it is impossible to examine people’s behavior by putting them into arbitrary boxes drawn from today’s accidental academic disciplines and their jargons, that does not imply that surveys or laboratory experiments are the solution. I would not make this statement if I had not found alternative ways of shedding light on human behavior and of solving the variety of problems these artificial experiments were supposed to address. In real life, people’s behavior across countries and time displayed consistency, not a spineless, too easily malleable human mind. Yes, people occasionally fell into traps of exuberance, deception and radicalism. But this happened when rulers pursued disastrous policies, destroying the options to create institutions that stabilize behavior.
The question becomes: why is so much nonsense then flourishing in academia? Part I of this series dealt with other nonsense, and earlier this year a series dealt with the pseudo-science of “macro-strology.”
Those articles gave partial answers. The models and jargons have one thing in common: they all rationalize increasing politicians’ role in society. Governments are assumed never to make mistakes or succumb to “animal spirits,” and academic elites have the answers. If preferences are easily manipulated and people are ignorant of how to deal with risks, the inevitable conclusion is: let Machiavelli’s Prince and the regulators take over.
This view gained currency gradually, as universities gave up on the selection of students and faculty. It happened as an unanticipated consequence of the accidental 1958 National Defense Education Act in the US, which brought about the heavily subsidized expansion of universities. Many predicted decades ago that this would lead to the decline of learning. Subsidies produced an inflation of diplomas, not brains, as recent events on US and Canadian campuses amply demonstrate.
It happened before. When Gulliver gets to Laputa, he finds the country in ruin. Yet academics continued to work on “extracting sunbeams out of cucumbers” with equally blinded students.
(The first part of this two-part series was published in Asia Times on Dec. 6)
Reuven Brenner holds the Repap Chair at McGill’s Desautels Faculty of Management. The article draws on his Educating Economists, Force of Finance and World of Chance.
The opinions expressed in this column are the author’s own and do not necessarily reflect the view of Asia Times.
(Copyright 2015 Asia Times Holdings Limited, a duly registered Hong Kong company. All rights reserved. Please contact us about sales, syndication and republishing.)