Mark Dean

“The Empirical Relationship between Non-Standard Economic Behaviors” (with Pietro Ortoleva) - Proceedings of the National Academy of Sciences, In Press

We study the joint distribution of 11 behavioral phenomena in a group of 190 laboratory subjects and compare it to the predictions of existing models as a step in the development of a parsimonious, general model of economic choice. We find strong correlations between most measures of risk and time preference; between compound lottery and ambiguity aversion; and between loss aversion and the endowment effect. Our results support some, but not all attempts to unify behavioral economic phenomena. Overconfidence and gender are also predictive of some behavioral characteristics.

PNAS version coming soon. Currently available is an earlier version of the paper under the title: “Is it All Connected? A Testing Ground for Unified Theories of Behavioral Economics Phenomena”

“Rational Inattention, Optimal Consideration Sets and Stochastic Choice” (with Andrew Caplin and John Leahy) - Review of Economic Studies, Forthcoming

We unite two basic approaches to modelling limited attention in choice by showing that the rational inattention model implies the formation of consideration sets -- only a subset of the available alternatives will be considered for choice. We provide necessary and sufficient conditions for rationally inattentive behavior which allow the identification of consideration sets. In simple settings, chosen options are those that are best on a stand-alone basis. In richer settings, the consideration set can only be identified holistically. In addition to payoffs, prior beliefs impact consideration sets. Simple linear equations identify all priors consistent with each possible consideration set. Paper

“Limited Attention and Status Quo Bias” (with Özgür Kıbrıs and Yusufcan Masatlioglu), Journal of Economic Theory, May 2017, 169: 93-127

We introduce and axiomatically characterize a model of status quo bias in which the status quo affects choices by both changing preferences and focusing attention. The resulting Limited Attention Status Quo Bias model can explain both the findings that status quo bias is more prevalent in larger choice sets and that the introduction of a status quo can change choices between non-status quo alternatives. Existing models of status quo bias are inconsistent with the former finding while models of decision avoidance are inconsistent with the latter. We show that the interaction of the two effects has important economic implications, and report the results of laboratory experiments which show that both attention and preference channels are necessary to explain the impact of status quo on choice. Paper

“Satisficing and Stochastic Choice” (with Victor Aguiar and Maria Jose Boccardi), Journal of Economic Theory, November 2016, 166: 445-482

Satisficing is a hugely influential model of boundedly rational choice, yet it cannot be easily tested using standard choice data. We develop necessary and sufficient conditions for stochastic choice data to be consistent with satisficing, assuming that preferences are fixed, but search order may change randomly. The model predicts that stochastic choice can only occur amongst elements that are always chosen, while all other choices must be consistent with standard utility maximization. Adding the assumption that the probability distribution over search orders is the same for all choice sets makes the satisficing model a subset of the class of random utility models. Paper
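As an illustrative sketch of the model's key prediction (the menu, utilities, and reservation level below are hypothetical, not from the paper), fixed preferences plus a random search order generate stochastic choice only among above-reservation items:

```python
import random

def satisfice(utility, reservation, search_order):
    """Search items in the given order and stop at the first whose utility
    meets the reservation level; if none does, pick the best item seen."""
    for item in search_order:
        if utility[item] >= reservation:
            return item
    return max(search_order, key=lambda x: utility[x])

# Hypothetical menu: a and b clear the reservation level of 4, c does not.
utility = {"a": 5, "b": 7, "c": 2}
menu = ["a", "b", "c"]

random.seed(0)  # search order varies randomly; preferences are fixed
chosen = {satisfice(utility, 4, random.sample(menu, len(menu)))
          for _ in range(1000)}
# Choice varies only among the above-reservation items a and b;
# the below-reservation item c is never chosen.
```

When no item clears the reservation level, search is exhausted and the function falls back to utility maximization, matching the paper's observation that all other choices must be consistent with standard utility maximization.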

“Allais, Ellsberg and Preferences for Hedging” (with Pietro Ortoleva), Theoretical Economics, January 2017, 12: 377–424

Two of the most well-known regularities of preferences under risk and uncertainty are ambiguity aversion and the Allais paradox. We study the behavior of an agent who can display both tendencies at the same time. We introduce a novel notion of preference for hedging that applies to both objective lotteries and uncertain acts. We show that this axiom, together with other standard ones, is equivalent to a representation in which the agent evaluates ambiguity using multiple priors, like in the model of Gilboa and Schmeidler [1989], but does not use Expected Utility to evaluate objective lotteries. Rather, lotteries are evaluated by distorting probabilities as in the Rank Dependent Utility model, but using the worst from a set of distortions. We show that a preference for hedging is not sufficient to guarantee an Ellsberg-like behavior if the agent violates Expected Utility for objective lotteries, and we provide a novel axiom that characterizes the special case of our representation that guarantees ambiguity aversion, linking the distortions for objective and subjective bets. Finally, we show that our representation is equivalent to one in which the agent treats objective lotteries as ‘ambiguous objects,’ and uses a set of priors to evaluate them. Paper

“Measuring Rationality with the Minimum Cost of Revealed Preference Violations” (with Daniel Martin), Review of Economics and Statistics, July 2016, 98(3): 524-534

We introduce a new measure of how close a set of choices is to satisfying the observable implications of rational choice and apply it to a large balanced panel of household-level consumption data. This new measure, the Minimum Cost Index, is the minimum cost of breaking all revealed preference cycles found in choices from budget sets. Using this measure we find that while observed violations of rationality are small in absolute terms, households are only moderately more rational than a benchmark of random choice. However, we find significant differences in the rationality of different demographic groups, with larger and older households closer to rationality. Surprisingly, households with more than one household head are also significantly more rational. In contrast to previous work, we document differences between demographic groups while controlling for predictive power. Paper Supplemental Material
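As a toy, brute-force illustration of the idea (not the paper's algorithm, and with hypothetical edge costs standing in for the money-metric costs used there), the index can be computed by finding the cheapest set of revealed-preference relations whose removal leaves no cycle:

```python
from itertools import combinations

def has_cycle(nodes, edges):
    """Directed-cycle check via depth-first search."""
    adj = {n: [b for (a, b) in edges if a == n] for n in nodes}
    state = {n: 0 for n in nodes}  # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(n):
        state[n] = 1
        for m in adj[n]:
            if state[m] == 1 or (state[m] == 0 and dfs(m)):
                return True
        state[n] = 2
        return False
    return any(state[n] == 0 and dfs(n) for n in nodes)

def minimum_cost_index(nodes, edges):
    """edges: (preferred, dominated, cost) triples. Return the minimum
    total cost of relations whose removal leaves the graph acyclic
    (exhaustive search; fine only for tiny examples)."""
    best = float("inf")
    for k in range(len(edges) + 1):
        for dropped in combinations(range(len(edges)), k):
            kept = [(a, b) for i, (a, b, _) in enumerate(edges)
                    if i not in dropped]
            if not has_cycle(nodes, kept):
                best = min(best, sum(edges[i][2] for i in dropped))
    return best

# Toy revealed-preference cycle x over y over z over x, with made-up costs:
edges = [("x", "y", 3.0), ("y", "z", 1.0), ("z", "x", 2.0)]
mci = minimum_cost_index(["x", "y", "z"], edges)
# The cheapest way to break the cycle is to drop the y-over-z relation.
```

Real applications use far larger data sets, so the paper's setting calls for specialized methods rather than the exponential enumeration above; the sketch only conveys what the index measures.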

“Revealed Preference, Rational Inattention, and Costly Information Acquisition” (with Andrew Caplin), American Economic Review, July 2015, 105 (7): 2183-2203

Apparently mistaken decisions are ubiquitous. To what extent does this reflect irrationality, as opposed to a rational trade-off between the costs of information acquisition and the expected benefits of learning? We develop a revealed preference test that characterizes all patterns of choice "mistakes" consistent with a general model of optimal costly information acquisition and identify the extent to which information costs can be recovered from choice data. Paper Supplemental Material. (A previous version of the paper including some experimental results is available here.)

“Search and Satisficing” (with Andrew Caplin and Daniel Martin), American Economic Review, December 2011, 101 (7): 2899-2922

Many options are available even for everyday choices. In practice, most decisions are made without full examination of all such options, so that the best available option may be missed. We develop a search-theoretic choice experiment to study the impact of incomplete consideration on the quality of choices. We find that many decisions can be understood using the satisficing model of Simon [1955]: most subjects search sequentially, stopping when a “satisficing” level of reservation utility is realized. We find that reservation utilities and search order respond systematically to changes in the decision making environment. Paper

“Search, Choice and Revealed Preference” (with Andrew Caplin), Theoretical Economics, January 2011, 6: 19-48

With complete information, choice of one option over another conveys preference. Yet when search is incomplete, this is not necessarily the case. It may instead reflect unawareness that a superior alternative was available. To separate these phenomena, we consider non-standard data on the evolution of provisional choices with contemplation time. We characterize precisely when the resulting data could have been generated by a general form of sequential search. We also characterize search that terminates based on a reservation utility stopping rule. We outline an experimental design that captures provisional choices in the pre-decision period. Paper

“Measuring Beliefs and Rewards: A Neuroeconomic Approach” (with Andrew Caplin, Paul Glimcher and
Robb Rutledge), Quarterly Journal of Economics, August 2010, 125(3): 923-960

The neurotransmitter dopamine is central to the emerging discipline of neuroeconomics; it is hypothesized to encode the difference between expected and realized rewards and thereby to mediate belief formation and choice. We develop the first formal test of this theory of dopaminergic function, based on a recent axiomatization by Caplin and Dean [2008A]. These tests are satisfied by neural activity in the nucleus accumbens, an area rich in dopamine receptors. We find evidence for separate positive and negative reward prediction error signals, suggesting that behavioral asymmetries in response to losses and gains may parallel asymmetries in nucleus accumbens activity. Paper

“Dopamine, Reward Prediction Error, and Economics” (with Andrew Caplin), Quarterly Journal of
Economics, May 2008 123(2): 663-701

The neurotransmitter dopamine has been found to play a crucial role in choice, learning, and belief formation. The best-developed current theory of dopaminergic function is the “reward prediction error” hypothesis—that dopamine encodes the difference between the experienced and predicted “reward” of an event. We provide axiomatic foundations for this hypothesis to help bridge the current conceptual gap between neuroscience and economics. Continued research in this area of overlap between social and natural science promises to overhaul our understanding of how beliefs and preferences are formed, how they evolve, and how they play out in the act of choice. Paper
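A minimal numerical sketch of the RPE hypothesis (a standard Rescorla-Wagner style illustration with made-up parameters, not the paper's axiomatic formulation):

```python
def rpe_learning(rewards, alpha=0.5, v0=0.0):
    """Track an expected reward V and the reward prediction error
    delta = r - V that dopamine is hypothesized to encode."""
    v, deltas = v0, []
    for r in rewards:
        delta = r - v          # experienced minus predicted reward
        deltas.append(delta)
        v += alpha * delta     # move the prediction toward the outcome
    return deltas

# A repeated, eventually anticipated reward yields a shrinking error:
deltas = rpe_learning([1.0, 1.0, 1.0, 1.0])
# deltas -> [1.0, 0.5, 0.25, 0.125]
```

The axiomatic approach in the paper tests the entire class of such models without committing to a particular learning rule or parameter values like the alpha above.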

“How can Neuroscience Inform Economics?” (with Ian Krajbich), Current Opinion in Behavioral Sciences, October 2015, Volume 4: 51-57

Neuroeconomics is now a well-established discipline at the intersection of neuroscience, psychology and economics, yet its influence on mainstream economics has been smaller than on the other two fields. This is in part because, unlike neuroscientists and psychologists, most economists are not interested in the process of decision making per se. We argue that neuroscience is most likely to influence economics in the short run by providing new insights into the relationships between variables that economists already study. In recent years the field has made many such contributions, using models from cognitive neuroscience to better explain choice behavior. Here we review this work that we think has great promise to contribute to economics in the near future. Paper

“A Game Theoretic Approach to Multimodal Communication” (with Alistair Wilson and James Higham), Behavioral Ecology and Sociobiology, September 2013, Volume 67(9): 1399-1415

Over the last few decades the animal communication community has become increasingly aware that much communication occurs using multiple signals in multiple modalities. The majority of this work has been empirical, with less theoretical work on the advantages conferred by such communication. In the present paper we ask: Why should animals communicate with multiple signals in multiple modalities? To tackle this question we use game theoretic techniques, and highlight developments in the economic signaling literature that might offer insight into biological problems. We start by establishing a signaling game, and investigate signal honesty under two prevailing paradigms of honest communication - costly signaling and cheap talk. In both paradigms, without further constraint, it is simple to show that anything that can be achieved with multiple signals can be achieved with one. We go on to investigate different sets of possible constraints that may make multiple signals and multimodal signals in particular more likely to evolve. We suggest that constraints on cost functions and bandwidths, orthogonal noise across modalities, strategically distinct modes, multiple qualities, multiple signalers, and multiple audiences, all provide biologically plausible scenarios that theoretically favor multiple and multimodal signaling. Paper

“Testing the Reward Prediction Error Hypothesis with an Axiomatic Model” (with Robb Rutledge, Andrew Caplin and Paul Glimcher), Journal of Neuroscience, October 2010, 30(40): 13525-13536

Neuroimaging studies typically identify neural activity correlated with the predictions of highly parameterized models, like the many reward prediction error (RPE) models used to study reinforcement learning. Identified brain areas might encode RPEs or alternatively simply have activity correlated with RPE model predictions. Here we use an alternate axiomatic approach rooted in economic theory to formally test the entire class of RPE models on neural data. We show that measurements of neural activity from the striatum, medial prefrontal cortex, amygdala, and posterior cingulate cortex satisfy necessary and sufficient conditions for the entire class of RPE models. However, activity measured from the anterior insula falsifies the axiomatic model and therefore no RPE model can account for this activity. Further analysis suggests the anterior insula might instead encode something related to the salience of an outcome. As cognitive neuroscience matures and models proliferate, formal approaches that assess entire classes of models rather than specific model exemplars may take on increased significance. Paper

“Axiomatic Methods, Dopamine and Reward Prediction Error” (with Andrew Caplin), Current Opinion in
Neurobiology, August 2008, 18(2): 197-202

The phasic firing rate of midbrain dopamine neurons has been shown to respond both to the receipt of rewarding stimuli, and the degree to which such stimuli are anticipated by the recipient. This has led to the hypothesis that these neurons encode reward prediction error (RPE)—the difference between how rewarding an event is, and how rewarding it was expected to be. However, the RPE model is one of a number of competing explanations for dopamine activity that have proved hard to disentangle, mainly because they are couched in terms of latent, or unobservable, variables. This article describes techniques for dealing with latent variables common in economics and decision theory, and reviews work that uses these techniques to provide simple, non-parametric tests of the RPE hypothesis, allowing clear differentiation between competing explanations. Paper

“Trading off Speed and Accuracy in Rapid, Goal-Directed Movements” (with Shih-Wei Woo and Laurence
Maloney), Journal of Vision, July 2007, 7(5): 1-12

Many studies have shown that humans face a trade-off between the speed and accuracy with which they can make movements. In this article, we asked whether humans choose movement time to maximize expected gain by taking into account their own speed–accuracy trade-off (SAT). We studied this question within the context of a rapid pointing task in which subjects received a reward for hitting a target on a monitor. The experimental design we used had two parts. First, we estimated individual trade-offs by motivating subjects to perform the pointing task under four different time constraints. Second, we tested whether subjects selected movement time optimally in an environment where they were rewarded for both speed and accuracy; the value of the target decreased linearly over time to zero. We ran two conditions in which the subjects faced different decay rates. Overall, the performance of 13 out of 16 subjects was indistinguishable from optimal. We concluded that in planning movements, humans take into account their own SAT to maximize expected gain. Paper

“Enhanced Choice Experiments” (with Andrew Caplin), Chapter in The Method of Modern Experimental Economics, Guillaume Frechette and Andrew Schotter, eds, 2015

We outline experiments that improve our understanding of decision making by analyzing behavior in the period of contemplation that precedes commitment to a final choice. The experiments are based on axiomatic models of the decision making process that relate closely to revealed preference logic. To test the models, we artificially incentivize particular choices to be made in the pre-decision period. We show how the resulting experiments can improve our understanding not only of the decision making process, but of the decision itself. Our broad method is to make aspects of search visible while retaining the disciplined approach to data that axiomatic modeling best provides. Paper

“What Can Neuroeconomics Tell Us About Economic Decisions (and Vice Versa)?”, Chapter in Comparative Decision Making, Philip Crowley and Thomas Zentall, eds, 2013

Neuroeconomics, or the combination of neuroscience data with economic questions and modeling techniques, has been around for almost 10 years, yet many economists remain sceptical of its value for informing models of economic decision making. This article attempts to define what it is that neuroeconomists are trying to do, as well as the explicit criticisms that have been leveled at the project from mainstream economists. I conclude that there is no in-principle reason why neuroscience cannot help inform economic modeling, particularly through 'inspiration' for new models, and by allowing process models to be tested piece by piece, rather than all at once. However, the fact that we have relatively few examples of either suggests that the project is not an easy one. Paper

“Economic Insights from ‘Neuroeconomic’ Data” (with Andrew Caplin), American Economic Review Papers and Proceedings, May 2008, 98(2): 169-174

No abstract. Paper

“Axiomatic Neuroeconomics” (with Andrew Caplin), Chapter in Neuroeconomics: Decision Making and the Brain, Paul Glimcher, Colin Camerer, Ernst Fehr and Russell Poldrack, eds, 2008

No abstract. Paper

“The Neuroeconomic Theory of Learning” (with Andrew Caplin), American Economic Review Papers and Proceedings, May 2007, 97(2): 148-152

No abstract. Paper

“Why has World Trade Grown Faster than World GDP?” (with Maria Sebastia-Barriel), Bank of England Quarterly Bulletin, Autumn 2004: 310-320

Between 1980 and 2002, world trade more than tripled while world output "only" doubled. The rise in trade relative to output is common across countries and regions, although the relative growth in trade and output varies greatly. This article attempts to explain why the ratio of world trade to output has increased over recent decades. It provides a brief review of the key determinants of trade growth and identifies proxies that enable us to quantify the relative importance of the different channels. We estimate this across a panel of ten developed countries. This allows us to understand better the path of world trade and thus the demand for UK exports. Furthermore, this approach helps us to distinguish between long-run trends in trade growth and cyclical movements around them. Paper

“Credit Constraints and the Measurement of Time Preferences” (with Anja Sautmann). Revision requested: Review of Economics and Statistics - Latest Version January 2019

Incentivized experiments are often used to identify the time preferences of households in developing countries. We argue theoretically and empirically that experimental measures may not identify preferences, but are a useful tool for understanding financial shocks and constraints. Using data from an experiment in Mali we find that subject responses vary with savings and financial shocks, meaning they provide information about credit constraints and can be used to test models of risk sharing. We use our model and data to determine that changes in consumption are driven by substantial unsmoothed ‘preference’ shocks, which are quantitatively important relative to income shocks. Paper

“Willingness-To-Pay and Willingness-To-Accept are Probably Less Correlated Than You Think” (with Jonathan Chapman, Pietro Ortoleva, Erik Snowberg and Colin Camerer). Revision Requested: Econometrica - Latest Version January 2019

An enormous literature documents that willingness to pay (WTP) is less than willingness to accept (WTA) a monetary amount for an object, a phenomenon called the endowment effect. Using data from an incentivized survey of a representative sample of 3,000 U.S. adults, we add one (probably) surprising additional finding: WTA and WTP for a lottery are, at best, slightly correlated. Across all participants, the correlation is slightly negative. We also collect data from published, incentivized studies, all run on university students, to analyze the correlation between WTA and WTP, which those studies did not examine. We document a correlation of 0.15--0.2, which is consistent with the correlation for high-IQ participants in our own data. While poorly related to each other, WTA and WTP are closely related to different measures of risk aversion, and relatively stable across time. Models of reference dependence can explain these correlations, but are inconsistent with other aspects of our data, suggesting the need for more theories and empirical studies of the processes of buying and selling. Paper

“Experimental Tests of Rational Inattention” (with Nathaniel Neligh) - Latest Version June 2019

We use laboratory experiments to test models of rational inattention, in which people acquire information to maximize utility net of information costs. We show that subjects adjust their attention in response to changes in incentives in line with the rational inattention model. However, our results are qualitatively inconsistent with information costs that are linear in Shannon entropy, as is often assumed in applied work. Our data is best fit by a generalization of the Shannon model which allows for a more flexible response to incentives and for some states of the world to be harder to distinguish than others. Paper

“Subsidies, Information, and the Timing of Children's Health Care in Mali” (with Anja Sautmann and Samuel Brown) - Latest Version April 2019

We study how healthcare subsidies and improved information through health worker visits affect the over- and underuse of primary care. In a randomized controlled trial of 1,768 children in Mali we collect a unique panel of nine weeks of daily data and study the impact of each policy on demand conditional on need for care, as defined by WHO standards. Subsidies substantially increase medically needed care, while overuse remains rare. Information has no aggregate effect on demand, but reduces underuse for the youngest children while increasing it for the eldest, in line with a model of optimal care seeking. Paper

“Rationally Inattentive Behavior: Characterizing and Generalizing Shannon Entropy” (with Andrew Caplin and John Leahy) - Latest Version February 2019

We provide a full behavioral characterization of the standard Shannon model of rational inattention. The key axiom is “Invariance under Compression”, which identifies this model as capturing an ideal form of attention-constrained choice. We introduce tractable generalizations that allow for many of the known behavioral violations from this ideal, including asymmetries and complementarities in learning, context effects, and low responsiveness to incentives. We provide an even more general method of recovering attention costs from behavioral data. The data set in which we characterize all behavioral patterns is “state dependent” stochastic choice data. Paper and (lengthy) technical appendix

“Econographics” (with Jonathan Chapman, Pietro Ortoleva, Erik Snowberg and Colin Camerer) - Latest Version January 2019

We study the pattern of correlations across a large number of behavioral regularities, with the goal of creating an empirical basis for more comprehensive theories of decision-making. We elicit 21 behaviors using an incentivized survey on a representative sample (n=1,000) of the U.S. population. Our data show a clear and relatively simple structure underlying the correlations between these measures. Using principal components analysis, we reduce the 21 variables to six components corresponding to clear clusters of high correlations. We examine the relationship between these components, cognitive ability, and demographics. Common extant theories explain some of the patterns in our data, but each theory we examine is also inconsistent with some patterns. Paper

“Impact of Health Worker Visits and Free Care on Health Outcomes and Behavior – a Randomized Controlled Trial in Bamako, Mali” (with Pierre Pratley, Anja Sautmann and Xinyi Zhang) - Latest Version January 2019

Two key policies for improving health outcomes in developing countries are the reduction of user fees and the engagement of community health workers. Yet questions remain about their effectiveness - in particular in combination. In a randomized controlled trial in Mali we examine their effect on a variety of child and household health outcomes and behaviors. Households that received both interventions experienced improvements in a number of health measures, while those that received health worker visits engaged in more preventative behavior. Our findings suggest that the two policies may be complementary in the way they address different healthcare needs. Paper

“Preference for Flexibility and Random Choice: an Experimental Analysis” (with John McNeill) - Latest Version December 2015

Agents may be uncertain about future preferences, leading to both a preference for flexibility in choice between menus and stochastic choice from menus. We describe experimental tests of the link between preference uncertainty and stochastic choice behavior in a real-effort task. We observe subjects' preferences over menus of work contracts and choice of effort from those contracts. We find that preference for flexibility is important: 61% of subjects exhibited a preference for flexibility when choosing between contracts to use at a future date. This demand for flexibility persists even when contract choices are implemented immediately after the contract choice is made, suggesting that the uncertainty motivating this demand may concern preferences rather than external factors. The choice of contracts is predictive of subsequent choice of effort level, suggesting that preference for flexibility is a rational response to uncertainty. Introducing a stochastic element to work contracts increased preference for flexibility, consistent with uncertainty playing a causal role in menu preferences. Paper

“The Behavioral Implications of Rational Inattention with Shannon Entropy” (with Andrew Caplin) - Latest Version August 2013

The model of rational inattention with Shannon mutual information costs is increasingly ubiquitous. We introduce a new solution method that lays bare the general behavioral properties of this model and liberates development of alternative models. We experimentally test a key behavioral property characterizing the elasticity of choice mistakes with respect to attentional incentives. We find that subjects are less responsive to such changes than the model implies. We introduce generalized entropy cost functions that better match this feature of the data and that retain key simplifying features of the Shannon model. Paper Supplemental Material
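As a hedged numerical sketch of the objects involved (a two-state, two-action example with made-up payoffs and prior, not the paper's solution method), an attention strategy is evaluated by its expected payoff net of a cost proportional to the mutual information between states and actions:

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(q * log(q) for q in p if q > 0)

def mutual_information(prior, strategy):
    """strategy[s][a] = P(action a | state s); returns I(state; action)."""
    n_actions = len(strategy[0])
    marginal = [sum(prior[s] * strategy[s][a] for s in range(len(prior)))
                for a in range(n_actions)]
    return entropy(marginal) - sum(prior[s] * entropy(strategy[s])
                                   for s in range(len(prior)))

def net_payoff(prior, payoff, strategy, kappa):
    """Expected payoff minus the Shannon attention cost kappa * I."""
    exp_u = sum(prior[s] * strategy[s][a] * payoff[s][a]
                for s in range(len(prior)) for a in range(len(payoff[0])))
    return exp_u - kappa * mutual_information(prior, strategy)

prior = [0.5, 0.5]
payoff = [[1.0, 0.0], [0.0, 1.0]]       # reward 1 if action matches state
attentive = [[1.0, 0.0], [0.0, 1.0]]    # learn the state perfectly
inattentive = [[0.5, 0.5], [0.5, 0.5]]  # ignore the state entirely
# Full attention earns 1 but costs kappa * ln 2; ignoring the state is free.
```

The elasticity the paper tests concerns how quickly the optimal strategy shifts from the inattentive toward the attentive extreme as payoff stakes rise; under Shannon costs this response takes a specific logit-like form, and the experimental finding is that subjects respond less than that form implies.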

“Status Quo Bias in Large and Small Choice Sets” - Latest Version November 2008

This paper introduces models of status quo bias based on the concept of decision avoidance, by which a decision maker may select the status quo in order to avoid a difficult decision. These models capture the experimental finding that the status quo is more frequently chosen in larger choice sets. This phenomenon violates the predictions of current preference-based models of status quo bias that assume a decision maker with a fixed status quo will make consistent choices. Using laboratory experiments, I show that subjects in large choice sets do exhibit behavior in line with decision avoidance, while in small choice sets, preference-based models offer a better explanation of behavior. These findings raise questions for advocated policies of “benign paternalism.” Paper
Department of Economics

Columbia University, Rm 1031, International Affairs Bldg, 420 W. 118th St., New York, NY, 10027, USA

mark.dean@columbia.edu

+1 212 854 3669