Can Loss Aversion Explain Ambiguity Aversion? Theory and Experiments
Work in progress by Zedekiah G. Higgs
This project is still in its early stages. If you are interested in some of my early work on this subject, including a formal development of the model and numerical examples demonstrating its ability to explain observed behavior across a variety of Ellsberg-urn problems, you can check out the following document:
Read some of my early work: Some Early Work
My plan is to continue developing the model and eventually design some experiments.
Abstract (for specialists)
Many theoretical models have been developed to explain ambiguity aversion. This paper proposes a new model that incorporates the insights of loss aversion into a two-stage model of ambiguity. In the first stage, a probability distribution \(\mu\) over the state space \(S\) is realized, being randomly drawn according to a probability measure \(M\) over \(\Delta(S)\). The probability measure \(M\) represents the decision maker's personalistic, or subjective, probabilities associated with drawing each \(\mu \in \Delta(S)\). In the second stage, a state \(s \in S\) is then drawn based on the probability distribution \(\mu\) realized in the first stage. As in previous models, it is assumed the decision maker (DM) views the two stages as separate and distinct. However, unlike previous models, it is also assumed that in the first stage the DM evaluates potential second-stage lotteries with respect to some reference lottery, and these evaluations are assumed to exhibit loss aversion. That is, each possible second-stage lottery is evaluated with respect to the reference lottery, with second-stage lotteries that are 'worse' than the reference lottery being treated as losses (and therefore weighted more heavily in the decision-making process). While loss aversion is typically applied to potential payoffs, here it is generalized to apply to potential first-stage outcomes: loss aversion is used to evaluate, with respect to the reference lottery, each potential single-stage lottery \((...;x_j,\mu(E_j);...)\) induced by the random drawing of \(\mu \in \Delta(S)\) with probability measure \(M\) in the first stage.
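For concreteness in what follows, consider the two-color Ellsberg urn with two balls discussed below: one may take \(S = \{r, b\}\), so that each \(\mu \in \Delta(S)\) is summarized by \(\mu(b) \in \{0, \tfrac{1}{2}, 1\}\). A purely illustrative choice of prior (not implied by the model) is the uniform one, \[ M\big(\mu(b)=0\big) = M\big(\mu(b)=\tfrac{1}{2}\big) = M\big(\mu(b)=1\big) = \tfrac{1}{3}. \]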
To enable the DM to evaluate second-stage lotteries as being either 'worse' or 'better' than the reference lottery, the model follows Segal (1987) in assuming that individuals have a preference function \(V(\cdot)\) defined over single-stage lotteries and they are able to use this preference function to determine the certainty equivalent of each potential single-stage lottery that could occur in the second stage. Thus, for a given act \(f(\cdot) = (...;x_j,E_j;...)\) and each probability distribution \(\mu \in \Delta(S)\), the DM calculates the certainty equivalent \(CE(f,\mu)\) for the lottery induced by \(\mu\), such that \[ V(CE(f,\mu), 1) = V(...; x_j, \mu(E_j); ...). \] The DM is assumed to have a different preference function, \(\psi(\cdot)\), over first-stage lotteries (i.e., over two-stage lotteries), and \(\psi(\cdot)\) is assumed to exhibit loss aversion. The certainty equivalent of each second-stage lottery is evaluated with respect to the certainty equivalent of the reference lottery. Second-stage lotteries with greater certainty equivalents than the reference lottery are considered 'gains,' while those with lower certainty equivalents are considered 'losses.'
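One simple specification consistent with this description (the particular functional form here is an illustrative assumption, not the only one the model admits) evaluates the first-stage lottery as \[ \psi(f) = \int_{\Delta(S)} v\big(CE(f,\mu) - CE_{\mathrm{ref}}\big)\, dM(\mu), \qquad v(z) = \begin{cases} z, & z \ge 0, \\ \lambda z, & z < 0, \end{cases} \] where \(CE_{\mathrm{ref}}\) is the certainty equivalent of the reference lottery and \(\lambda > 1\) is a loss-aversion coefficient, so that second-stage lotteries falling short of the reference lottery are weighted more heavily than those exceeding it.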
The model is of course sensitive to the choice of algorithm used to determine the reference lottery. However, in comparisons of an ambiguous gamble and an unambiguous gamble, there is a straightforward reference lottery to use in the evaluation of the ambiguous gamble: the unambiguous gamble. (In the evaluation of the unambiguous gamble no reference lottery is needed, since the unambiguous gamble is a single-stage lottery evaluated using \(V(\cdot)\).) This framework provides an intuitive explanation for why a DM would prefer an unambiguous urn over an ambiguous urn in the classic Ellsberg two-color urn problem: while the ambiguous urn provides the DM with a chance at getting better odds, it also runs the risk of providing worse odds, and due to loss aversion the potential downside outweighs the potential upside.
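To see the mechanism numerically under illustrative assumptions (a uniform \(M\) over the three possible compositions, a risk-neutral \(V\), the piecewise-linear \(v\) above, and a prize of \(x > 0\) for a correct guess): the three possible compositions of the ambiguous urn yield certainty equivalents \(x\), \(x/2\), and \(0\), while the reference (unambiguous) lottery has certainty equivalent \(x/2\). The gains and losses relative to the reference are therefore \(+x/2\), \(0\), and \(-x/2\), so \[ \psi = \tfrac{1}{3}\cdot\tfrac{x}{2} + \tfrac{1}{3}\cdot 0 - \tfrac{1}{3}\cdot\lambda\,\tfrac{x}{2} = \tfrac{x}{6}(1-\lambda) < 0 \quad \text{whenever } \lambda > 1, \] so the ambiguous urn is evaluated below the unambiguous one, and by symmetry the same calculation applies to the bet on red.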
The model is also capable of explaining other puzzles, such as why individuals may instead display ambiguity preference in certain situations, as well as the more recent finding that individuals prefer larger ambiguous urns over smaller ones.
Not-so-abstract (for curious outsiders)
⚠️ This summary might gloss over some important details.
Suppose you are presented with two urns, each containing a total of two balls. You know that the first urn (Urn 1) has one red ball and one black ball. And you know that the second urn (Urn 2) can only contain red and black balls, but you do not know the exact makeup of the second urn (i.e., it may contain two red balls, two black balls, or one of each). Now suppose that you are offered a bet that pays you if a black ball is selected, but you must choose which urn to place the bet on. You know with certainty that Urn 1 provides a 50% chance of winning, but Urn 2 is ambiguous (for example, it's possible Urn 2 doesn't have any black balls, in which case your odds of winning would be 0%). Which urn would you prefer to place the bet on?
This setup is known as Ellsberg's two-color urn problem, and variations of it have been studied extensively. In practice, people tend to prefer the bet on the unambiguous urn (Urn 1), where they know they have a 50% chance of winning. This by itself is not an issue---it is perfectly plausible, for example, that you believe Urn 2 has zero black balls (and therefore provides a 0% chance of winning). However, when individuals are then offered the exact same bet on drawing a red ball, they still prefer Urn 1. Now this is an issue: whatever belief makes Urn 1 look better for the bet on black (say, that Urn 2 probably contains fewer black balls, and hence more red balls) must make Urn 2 look better for the bet on red. For instance, if you believe Urn 2 has zero black balls (and therefore prefer Urn 1 in the first bet), then you must believe Urn 2 contains two red balls, in which case you should definitely prefer Urn 2 for the second gamble. Instead, people tend to prefer the unambiguous urn for both gambles. This behavior has been labeled ambiguity aversion.
Many theoretical models have been developed to help explain ambiguity aversion. In this project I propose a new one that incorporates the insights of loss aversion, or the tendency for individuals to dislike losses more than they like gains. The intuition of the model is as follows. In the simple two-urn problem discussed above, Urn 2 has three possible color distributions: it could consist of (i) 2 black balls and 0 red balls \((2b, 0r)\), (ii) 1 black ball and 1 red ball \((1b, 1r)\), or (iii) 0 black balls and 2 red balls \((0b, 2r)\). Thus, if you are deciding which urn to place the bet on black with, choosing Urn 2 leads to three possible outcomes: (i) if the distribution turns out to be \((2b, 0r)\), you have improved your odds of winning to 100% (from the 50% chance Urn 1 offers); (ii) if the distribution turns out to be \((1b, 1r)\), your odds of winning are the same as with Urn 1; and (iii) if the distribution turns out to be \((0b, 2r)\), you have decreased your odds of winning to 0%. So there is some chance you improve your odds by choosing Urn 2, and some chance you worsen them. However, because of loss aversion, you dislike the prospect of worsening your odds more than you value the chance of improving them. As a result, you will prefer to bet on Urn 1. Furthermore, this argument is identical for a bet on a red ball being drawn, so you will prefer Urn 1 in both cases.
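If you like to see things in code, here is a tiny numerical sketch of that intuition. The numbers are illustrative assumptions on my part (equal beliefs over the three possible urn compositions, losses weighted twice as heavily as gains, and a $10 prize), not estimates or results from the paper.

```python
# Illustrative sketch of the loss-aversion intuition for the two-color urn.
# Assumed for illustration only: equal beliefs over the three compositions,
# a loss-aversion multiplier of 2, and a $10 prize for a correct guess.

PRIZE = 10.0          # payoff if the chosen color is drawn
LOSS_AVERSION = 2.0   # losses weigh twice as much as gains

# Possible compositions of the ambiguous urn: probability of drawing black.
compositions = {"2 black, 0 red": 1.0, "1 black, 1 red": 0.5, "0 black, 2 red": 0.0}
belief = 1.0 / len(compositions)  # equal weight on each composition

# Reference point: the unambiguous urn's expected payoff (50% chance of the prize).
reference = 0.5 * PRIZE

value_of_ambiguous_urn = 0.0
for label, p_black in compositions.items():
    expected_payoff = p_black * PRIZE      # expected payoff if this composition is true
    change = expected_payoff - reference   # gain or loss relative to the known urn
    weight = LOSS_AVERSION if change < 0 else 1.0
    value_of_ambiguous_urn += belief * weight * change
    print(f"{label}: change in expected payoff = {change:+.2f} (weighted {weight:.0f}x)")

print(f"Value of betting on black in the ambiguous urn, relative to the known urn: "
      f"{value_of_ambiguous_urn:+.2f}")
# The total is negative whenever LOSS_AVERSION > 1, so the known urn is preferred --
# and by symmetry the same calculation applies to a bet on red.
```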
The model is nice because it is also able to explain other puzzles, such as why individuals display aversion to ambiguity in some settings but seek it in others, as well as why individuals demonstrate a preference for larger ambiguous urns over smaller ambiguous urns. Plus, I think it provides a fairly intuitive explanation for observed behavior. Because I am an experimentalist, the plan is to also design some experiments to test the model, but I'm still working on that!