Mark Dean

“Rationally Inattentive Behavior: Characterizing and Generalizing Shannon Entropy” (with Andrew Caplin and John Leahy), Journal of Political Economy, Accepted

We introduce three new classes of attention cost functions: posterior separable, uniformly posterior separable and invariant posterior separable. As with the Shannon cost function, all can be solved using Lagrangean methods. Uniformly posterior separable cost functions capture many forms of sequential learning, hence play a key role in many applications. Invariant posterior separable cost functions make learning strategies depend exclusively on payoff uncertainty. We introduce two behavioral axioms, Locally Invariant Posteriors and Only Payoffs Matter, which identify posterior separable functions respectively as uniformly and invariant posterior separable. In combination they pinpoint the Shannon cost function. Paper and technical appendix. Working paper version with additional results available here

“Credit Constraints and the Measurement of Time Preferences” (with Anja Sautmann), Review of Economics and Statistics, 2021, 103(1): 119–135.

Incentivized experiments are often used to identify the time preferences of households in developing countries. We argue theoretically and empirically that experimental measures may not identify preferences, but are a useful tool for understanding financial shocks and constraints. Using data from an experiment in Mali we find that subject responses vary with savings and financial shocks, meaning they provide information about credit constraints and can be used to test models of risk sharing. We use our model and data to determine that changes in consumption are driven by substantial unsmoothed ‘preference’ shocks, which are quantitatively important relative to income shocks. Paper and online appendix.

“The Empirical Relationship between Non-Standard Economic Behaviors” (with Pietro Ortoleva) - Proceedings of the National Academy of Sciences, 2019, 116(33): 16262-16267

We study the joint distribution of 11 behavioral phenomena in a group of 190 laboratory subjects and compare it to the predictions of existing models as a step in the development of a parsimonious, general model of economic choice. We find strong correlations between most measures of risk and time preference; between compound lottery and ambiguity aversion; and between loss aversion and the endowment effect. Our results support some, but not all, attempts to unify behavioral economic phenomena. Overconfidence and gender are also predictive of some behavioral characteristics.

PNAS version here. Also available is an earlier version of the paper under the title “Is it All Connected? A Testing Ground for Unified Theories of Behavioral Economics Phenomena”

“Rational Inattention, Optimal Consideration Sets and Stochastic Choice” (with Andrew Caplin and John Leahy) - Review of Economic Studies, May 2019, 86(3): 1061–1094

We unite two basic approaches to modelling limited attention in choice by showing that the rational inattention model implies the formation of consideration sets -- only a subset of the available alternatives will be considered for choice. We provide necessary and sufficient conditions for rationally inattentive behavior which allow the identification of consideration sets. In simple settings, chosen options are those that are best on a stand-alone basis. In richer settings, the consideration set can only be identified holistically. In addition to payoffs, prior beliefs impact consideration sets. Simple linear equations identify all priors consistent with each possible consideration set. Paper

“Limited Attention and Status Quo Bias” (with Özgür Kıbrıs and Yusufcan Masatlioglu), Journal of Economic Theory, May 2017, 169: 93-127

We introduce and axiomatically characterize a model of status quo bias in which the status quo affects choices by both changing preferences and focusing attention. The resulting Limited Attention Status Quo Bias model can explain both the findings that status quo bias is more prevalent in larger choice sets and that the introduction of a status quo can change choices between non-status quo alternatives. Existing models of status quo bias are inconsistent with the former finding while models of decision avoidance are inconsistent with the latter. We show that the interaction of the two effects has important economic implications, and report the results of laboratory experiments which show that both attention and preference channels are necessary to explain the impact of status quo on choice. Paper

“Satisficing and Stochastic Choice” (with Victor Aguiar and Maria Jose Boccardi), Journal of Economic Theory, November 2016, 166: 445-482

Satisficing is a hugely influential model of boundedly rational choice, yet it cannot be easily tested using standard choice data. We develop necessary and sufficient conditions for stochastic choice data to be consistent with satisficing, assuming that preferences are fixed, but search order may change randomly. The model predicts that stochastic choice can only occur amongst elements that are always chosen, while all other choices must be consistent with standard utility maximization. Adding the assumption that the probability distribution over search orders is the same for all choice sets makes the satisficing model a subset of the class of random utility models. Paper
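To illustrate the kind of stochastic choice the satisficing model generates, the sketch below computes the exact choice distribution when preferences are fixed and every search order is equally likely. The function name, the three-alternative menu, and the reservation level are my own illustrative choices, not objects from the paper:

```python
import itertools
from fractions import Fraction

def satisficing_distribution(menu, utility, reservation):
    """Exact choice distribution under satisficing with uniformly random
    search order: stop at the first alternative whose utility meets the
    reservation level; if none qualifies, choose the utility maximizer."""
    counts = {a: 0 for a in menu}
    orders = list(itertools.permutations(menu))
    for order in orders:
        chosen = next((a for a in order if utility[a] >= reservation), None)
        if chosen is None:
            chosen = max(order, key=lambda a: utility[a])
        counts[chosen] += 1
    total = len(orders)
    return {a: Fraction(c, total) for a, c in counts.items()}

u = {"a": 3, "b": 2, "c": 1}
# With reservation 2, both a and b satisfice, so choice is stochastic
# between them depending on which is found first; c is never chosen.
dist = satisficing_distribution(["a", "b", "c"], u, reservation=2)
```

Here `dist` assigns probability 1/2 each to a and b and zero to c: randomness arises only among the satisficing alternatives, while choices among the rest remain consistent with utility maximization, in the spirit of the paper's characterization.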

“Allais, Ellsberg and Preferences for Hedging” (with Pietro Ortoleva), Theoretical Economics, January 2017, 12: 377–424

Two of the most well-known regularities of preferences under risk and uncertainty are ambiguity aversion and the Allais paradox. We study the behavior of an agent who can display both tendencies at the same time. We introduce a novel notion of preference for hedging that applies to both objective lotteries and uncertain acts. We show that this axiom, together with other standard ones, is equivalent to a representation in which the agent evaluates ambiguity using multiple priors, as in the model of Gilboa and Schmeidler [1989], but does not use Expected Utility to evaluate objective lotteries. Rather, lotteries are evaluated by distorting probabilities as in the Rank Dependent Utility model, but using the worst from a set of distortions. We show that a preference for hedging is not sufficient to guarantee Ellsberg-like behavior if the agent violates Expected Utility for objective lotteries, and we provide a novel axiom that characterizes the special case of our representation that guarantees ambiguity aversion, linking the distortions for objective and subjective bets. Finally, we show that our representation is equivalent to one in which the agent treats objective lotteries as ‘ambiguous objects’ and uses a set of priors to evaluate them. Paper

“Measuring Rationality with the Minimum Cost of Revealed Preference Violations” (with Daniel Martin), Review of Economics and Statistics, July 2016, 98(3): 524-534

We introduce a new measure of how close a set of choices is to satisfying the observable implications of rational choice and apply it to a large balanced panel of household-level consumption data. This new measure, the Minimum Cost Index, is the minimum cost of breaking all revealed preference cycles found in choices from budget sets. Using this measure we find that while observed violations of rationality are small in absolute terms, households are only moderately more rational than a benchmark of random choice. However, we find significant differences in the rationality of different demographic groups, with larger and older households closer to rationality. Surprisingly, households with more than one household head are also significantly more rational. In contrast to previous work, we document differences between demographic groups while controlling for predictive power. Paper Supplemental Material
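The logic of a minimum-cost index can be sketched in a few lines: build the directed revealed-preference graph from budget data, weight each relation by the expenditure slack that generates it, and find the cheapest set of relations whose removal breaks all cycles. This is a toy brute-force illustration under my own simplifying assumptions (in particular, the exact cost weights and algorithm used in the paper may differ), feasible only for a handful of observations:

```python
import itertools
import numpy as np

def revealed_preference_edges(prices, bundles):
    """Edge i -> j when bundle j was affordable at observation i
    (p_i . x_i >= p_i . x_j), weighted by the expenditure slack."""
    n = len(prices)
    edges = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                slack = prices[i] @ bundles[i] - prices[i] @ bundles[j]
                if slack >= 0:
                    edges[(i, j)] = slack
    return edges

def has_cycle(nodes, edges):
    """DFS-based cycle detection on the directed graph given by edge keys."""
    color = {v: 0 for v in nodes}          # 0 unvisited, 1 in stack, 2 done
    def dfs(v):
        color[v] = 1
        for (a, b) in edges:
            if a == v:
                if color[b] == 1 or (color[b] == 0 and dfs(b)):
                    return True
        color[v] = 2
        return False
    return any(color[v] == 0 and dfs(v) for v in nodes)

def minimum_cost_index(prices, bundles):
    """Minimum total weight of removed edges leaving the graph acyclic
    (brute force over edge subsets -- for tiny examples only)."""
    edges = revealed_preference_edges(prices, bundles)
    nodes = range(len(prices))
    if not has_cycle(nodes, edges):
        return 0.0
    items = list(edges.items())
    best = float("inf")
    for r in range(1, len(items) + 1):
        for drop in itertools.combinations(items, r):
            dropped = dict(drop)
            kept = {e: w for e, w in items if e not in dropped}
            if not has_cycle(nodes, kept):
                best = min(best, sum(w for _, w in drop))
    return best

# A classic two-observation WARP violation: each bundle was affordable
# when the other was chosen, creating a cycle that costs 1 to break.
prices = np.array([[2.0, 1.0], [1.0, 2.0]])
bundles = np.array([[2.0, 1.0], [1.0, 2.0]])
mci = minimum_cost_index(prices, bundles)
```

In general the weighted problem is a minimum feedback arc set, which is NP-hard, so real datasets need the kind of optimization machinery the paper develops rather than this enumeration.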

“Revealed Preference, Rational Inattention, and Costly Information Acquisition” (with Andrew Caplin), American Economic Review, July 2015, 105 (7): 2183-2203

Apparently mistaken decisions are ubiquitous. To what extent does this reflect irrationality, as opposed to a rational trade-off between the costs of information acquisition and the expected benefits of learning? We develop a revealed preference test that characterizes all patterns of choice "mistakes" consistent with a general model of optimal costly information acquisition and identify the extent to which information costs can be recovered from choice data. Paper Supplemental Material. (A previous version of the paper including some experimental results is available here.)

“Search and Satisficing” (with Andrew Caplin and Daniel Martin), American Economic Review, December 2011, 101 (7): 2899-2922

Many options are available even for everyday choices. In practice, most decisions are made without full examination of all such options, so that the best available option may be missed. We develop a search-theoretic choice experiment to study the impact of incomplete consideration on the quality of choices. We find that many decisions can be understood using the satisficing model of Simon [1955]: most subjects search sequentially, stopping when a “satisficing” level of reservation utility is realized. We find that reservation utilities and search order respond systematically to changes in the decision making environment. Paper

“Search, Choice and Revealed Preference” (with Andrew Caplin), Theoretical Economics, January 2011, 6: 19-48

With complete information, choice of one option over another conveys preference. Yet when search is incomplete, this is not necessarily the case. It may instead reflect unawareness that a superior alternative was available. To separate these phenomena, we consider non-standard data on the evolution of provisional choices with contemplation time. We characterize precisely when the resulting data could have been generated by a general form of sequential search. We characterize also search that terminates based on a reservation utility stopping rule. We outline an experimental design that captures provisional choices in the pre-decision period. Paper

“Measuring Beliefs and Rewards: A Neuroeconomic Approach” (with Andrew Caplin, Paul Glimcher and Robb Rutledge), Quarterly Journal of Economics, August 2010, 125(3): 923-960

The neurotransmitter dopamine is central to the emerging discipline of neuroeconomics; it is hypothesized to encode the difference between expected and realized rewards and thereby to mediate belief formation and choice. We develop the first formal test of this theory of dopaminergic function, based on a recent axiomatization by Caplin and Dean [2008A]. These tests are satisfied by neural activity in the nucleus accumbens, an area rich in dopamine receptors. We find evidence for separate positive and negative reward prediction error signals, suggesting that behavioral asymmetries in response to losses and gains may parallel asymmetries in nucleus accumbens activity. Paper

“Dopamine, Reward Prediction Error, and Economics” (with Andrew Caplin), Quarterly Journal of Economics, May 2008, 123(2): 663-701

The neurotransmitter dopamine has been found to play a crucial role in choice, learning, and belief formation. The best-developed current theory of dopaminergic function is the “reward prediction error” hypothesis—that dopamine encodes the difference between the experienced and predicted “reward” of an event. We provide axiomatic foundations for this hypothesis to help bridge the current conceptual gap between neuroscience and economics. Continued research in this area of overlap between social and natural science promises to overhaul our understanding of how beliefs and preferences are formed, how they evolve, and how they play out in the act of choice. Paper

“How can Neuroscience Inform Economics?” (with Ian Krajbich), Current Opinion in Behavioral Sciences, October 2015, Volume 4: 51-57

Neuroeconomics is now a well-established discipline at the intersection of neuroscience, psychology and economics, yet its influence on mainstream economics has been smaller than on the other two fields. This is in part because, unlike neuroscientists and psychologists, most economists are not interested in the process of decision making per se. We argue that neuroscience is most likely to influence economics in the short run by providing new insights into the relationships between variables that economists already study. In recent years the field has made many such contributions, using models from cognitive neuroscience to better explain choice behavior. Here we review this work that we think has great promise to contribute to economics in the near future. Paper

“A Game Theoretic Approach to Multimodal Communication” (with Alistair Wilson and James Higham), Behavioral Ecology and Sociobiology, September 2013, Volume 67(9): 1399-1415

Over the last few decades the animal communication community has become increasingly aware that much communication occurs using multiple signals in multiple modalities. The majority of this work has been empirical, with less theoretical work on the advantages conferred by such communication. In the present paper we ask: Why should animals communicate with multiple signals in multiple modalities? To tackle this question we use game theoretic techniques, and highlight developments in the economic signaling literature that might offer insight into biological problems. We start by establishing a signaling game, and investigate signal honesty under two prevailing paradigms of honest communication - costly signaling and cheap talk. In both paradigms, without further constraint, it is simple to show that anything that can be achieved with multiple signals can be achieved with one. We go on to investigate different sets of possible constraints that may make multiple signals and multimodal signals in particular more likely to evolve. We suggest that constraints on cost functions and bandwidths, orthogonal noise across modalities, strategically distinct modes, multiple qualities, multiple signalers, and multiple audiences, all provide biologically plausible scenarios that theoretically favor multiple and multimodal signaling. Paper

“Testing the Reward Prediction Error Hypothesis with an Axiomatic Model” (with Robb Rutledge, Andrew Caplin and Paul Glimcher), Journal of Neuroscience, October 2010, 30(40): 13525-13533

Neuroimaging studies typically identify neural activity correlated with the predictions of highly parameterized models, like the many reward prediction error (RPE) models used to study reinforcement learning. Identified brain areas might encode RPEs or alternatively simply have activity correlated with RPE model predictions. Here we use an alternate axiomatic approach rooted in economic theory to formally test the entire class of RPE models on neural data. We show that measurements of neural activity from the striatum, medial prefrontal cortex, amygdala, and posterior cingulate cortex satisfy necessary and sufficient conditions for the entire class of RPE models. However, activity measured from the anterior insula falsifies the axiomatic model and therefore no RPE model can account for this activity. Further analysis suggests the anterior insula might instead encode something related to the salience of an outcome. As cognitive neuroscience matures and models proliferate, formal approaches that assess entire classes of models rather than specific model exemplars may take on increased significance. Paper

“Axiomatic Methods, Dopamine and Reward Prediction Error” (with Andrew Caplin), Current Opinion in Neurobiology, August 2008, 18(2): 197-202

The phasic firing rate of midbrain dopamine neurons has been shown to respond both to the receipt of rewarding stimuli, and the degree to which such stimuli are anticipated by the recipient. This has led to the hypothesis that these neurons encode reward prediction error (RPE)—the difference between how rewarding an event is, and how rewarding it was expected to be. However, the RPE model is one of a number of competing explanations for dopamine activity that have proved hard to disentangle, mainly because they are couched in terms of latent, or unobservable, variables. This article describes techniques for dealing with latent variables common in economics and decision theory, and reviews work that uses these techniques to provide simple, non-parametric tests of the RPE hypothesis, allowing clear differentiation between competing explanations. Paper

“Trading off Speed and Accuracy in Rapid, Goal-Directed Movements” (with Shih-Wei Woo and Laurence Maloney), Journal of Vision, July 2007, 7(5): 1-12

Many studies have shown that humans face a trade-off between the speed and accuracy with which they can make movements. In this article, we asked whether humans choose movement time to maximize expected gain by taking into account their own speed–accuracy trade-off (SAT). We studied this question within the context of a rapid pointing task in which subjects received a reward for hitting a target on a monitor. The experimental design we used had two parts. First, we estimated individual trade-offs by motivating subjects to perform the pointing task under four different time constraints. Second, we tested whether subjects selected movement time optimally in an environment where they were rewarded for both speed and accuracy; the value of the target decreased linearly over time to zero. We ran two conditions in which the subjects faced different decay rates. Overall, the performance of 13 out of 16 subjects was indistinguishable from optimal. We concluded that in planning movements, humans take into account their own SAT to maximize expected gain. Paper

“Enhanced Choice Experiments” (with Andrew Caplin), Chapter in The Method of Modern Experimental Economics, Guillaume Frechette and Andrew Schotter, eds, 2015

We outline experiments that improve our understanding of decision making by analyzing behavior in the period of contemplation that precedes commitment to a final choice. The experiments are based on axiomatic models of the decision making process that relate closely to revealed preference logic. To test the models, we artificially incentivize particular choices to be made in the pre-decision period. We show how the resulting experiments can improve our understanding not only of the decision making process, but of the decision itself. Our broad method is to make aspects of search visible while retaining the disciplined approach to data that axiomatic modeling best provides. Paper

“What Can Neuroeconomics Tell Us About Economic Decisions (and Vice Versa)?”, Chapter in Comparative Decision Making, Philip Crowley and Thomas Zentall, eds, 2013

Neuroeconomics, or the combination of neuroscience data with economic questions and modeling techniques, has been around for almost 10 years, yet many economists remain sceptical of its value for informing models of economic decision making. This article attempts to define what it is that neuroeconomists are trying to do, as well as the explicit criticisms that have been leveled at the project by mainstream economists. I conclude that there is no in-principle reason why neuroscience cannot help inform economic modeling, particularly through `inspiration' for new models, and by allowing process models to be tested piece by piece, rather than all at once. However, the fact that we have relatively few examples of either suggests that the project is not an easy one. Paper

“Economic Insights from ‘Neuroeconomic’ Data” (with Andrew Caplin), American Economic Review Papers and Proceedings, May 2008, 98(2): 169-174

No abstract. Paper

“Axiomatic Neuroeconomics” (with Andrew Caplin), Chapter in Neuroeconomics: Decision Making and the Brain, Paul Glimcher, Colin Camerer, Ernst Fehr and Russell Poldrack, eds, 2008

No abstract. Paper

“The Neuroeconomic Theory of Learning” (with Andrew Caplin), American Economic Review Papers and Proceedings, May 2007, 97(2): 148-152

No abstract. Paper

“Why has World Trade Grown Faster than World GDP?” (with Maria Sebastia-Barriel), Bank of England Quarterly Bulletin, Autumn 2004: 310-320

Between 1980 and 2002, world trade more than tripled while world output "only" doubled. The rise in trade relative to output is common across countries and regions, although the relative growth in trade and output varies greatly. This article attempts to explain why the ratio of world trade to output has increased over recent decades. It provides a brief review of the key determinants of trade growth and identifies proxies that enable us to quantify the relative importance of the different channels. We estimate this across a panel of ten developed countries. This allows us to understand better the path of world trade and thus the demand for UK exports. Furthermore, this approach helps us to distinguish between long-run trends in trade growth and cyclical movements around them. Paper

“Experimental Tests of Rational Inattention” (with Nathaniel Neligh) - Latest Version June 2022

We use laboratory experiments to test models of rational inattention, in which people acquire information to maximize utility net of information costs. We show that subjects adjust their attention in response to changes in incentives in line with the rational inattention model. However, our results are qualitatively inconsistent with information costs that are linear in Shannon entropy, as is often assumed in applied work. Our data is best fit by a generalization of the Shannon model which allows for a more flexible response to incentives and for some states of the world to be harder to distinguish than others. Paper

“Subsidies, Information, and the Timing of Children's Health Care in Mali” (with Anja Sautmann and Samuel Brown) - Latest Version February 2022

Sustained progress on child mortality requires better curative care. However, policy instruments intended to increase access to healthcare may only incompletely reduce underuse or may create overuse. We conduct an RCT of 1,768 children in Mali that cross-randomizes subsidies and community health workers who visit families and monitor the child's health, and analyze how these interventions affect the targeting of acute care, which depends not just on overall demand, but on whether children receive care when actually sick. We collect nine weeks of daily symptom and health care data to measure demand conditional on need for care, as defined by WHO standards. Parents are over five times more likely to seek care when it is medically indicated, yet the probability of getting needed care remains below 5% in the control group. Subsidies increase utilization by over 250%, significantly reducing underuse with moderate effects on overuse. Health worker visits have no aggregate effects, but likely improve use of the subsidy for the youngest children. Paper

“The Effects of Community Health Worker Visits and Primary Care Subsidies on Health Behavior and Health Outcomes for Children in Urban Mali” (with Anja Sautmann) - Latest Version January 2022

Subsidized primary care and community health worker (CHW) visits are important demand side policies in the effort to achieve universal health care for children under five. Causal evidence on the effects of these policies, alone and in interaction, is still sparse. We report effects on diarrhea prevention, curative care, and incidence as well as anthropometrics for 1649 children from a randomized control trial in Bamako, Mali, that cross-randomized CHW visits and access to free health care. CHW visits improve prevention and subsidies increase the use of curative care for acute illness, with some indication of positive interaction effects. There is no evidence of moral hazard, such as reduced preventive care among families receiving the subsidy. Although there are no significant improvements in malnutrition, diarrhea incidence is reduced by over 70% in the group that receives both subsidies and CHW. Positive effects are concentrated among children ages 0 to 2. Paper

“On the Relation between Willingness to Accept and Willingness to Pay” (with Jonathan Chapman, Pietro Ortoleva, Erik Snowberg and Colin Camerer) - Latest Version May 2021

A vast literature documents that willingness to pay (WTP) is less than willingness to accept (WTA) a monetary amount for an object, a phenomenon called the endowment effect. Using data from three incentivized studies with a total representative sample of 4,000 U.S. adults, we add one additional finding: WTA and WTP for a lottery are (essentially) uncorrelated. In contrast, independent measures of WTA (or WTP) are highly correlated, and relatively stable across time. Leading models of reference-dependent preferences are compatible with a zero correlation between WTA and WTP, but only for specific parameterizations and ruling out popular special cases. These models also predict a relationship between the endowment effect and loss aversion, which we do not find. Paper

“Econographics” (with Jonathan Chapman, Pietro Ortoleva, Erik Snowberg and Colin Camerer) - Latest Version November 2020

We study the pattern of correlations across a large number of behavioral regularities, with the goal of creating an empirical basis for more comprehensive theories of decision-making. We elicit 21 behaviors using an incentivized survey on a representative sample (n = 1,000) of the U.S. population. Our data show a clear and relatively simple structure underlying the correlations between these measures. Using principal components analysis, we reduce the 21 variables to six components corresponding to clear clusters of high correlations. We examine the relationship between these components, ability, and demographics. Common extant theories explain some of the patterns in our data, but each theory we examine is also inconsistent with some patterns as well. Paper

“Preference for Flexibility and Random Choice: an Experimental Analysis” (with John McNeill) - Latest Version October 2020

Agents may be uncertain about future preferences, leading to a preference for flexibility in menu choice and stochastic choice from menus. Such uncertainty may be important for contract design, and may offset a preference for commitment arising from temptation. We experimentally measure choice between and from menus in a real-effort task. We find a preference for flexibility in 61% of subjects. Demand for flexibility persists even when contracts are implemented immediately after the choice is made. The choice of contracts is predictive of subsequent choice of effort level, suggesting that preference for flexibility is a rational response to uncertainty. Paper

“The Behavioral Implications of Rational Inattention with Shannon Entropy” (with Andrew Caplin) - Latest Version August 2013

The model of rational inattention with Shannon mutual information costs is increasingly ubiquitous. We introduce a new solution method that lays bare the general behavioral properties of this model and liberates development of alternative models. We experimentally test a key behavioral property characterizing the elasticity of choice mistakes with respect to attentional incentives. We find that subjects are less responsive to such changes than the model implies. We introduce generalized entropy cost functions that better match this feature of the data and that retain key simplifying features of the Shannon model. Paper Supplemental Material
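As a concrete illustration of the mutual-information cost at the heart of the Shannon model, the sketch below computes I(state; signal) for an attention strategy described by signal probabilities conditional on the state; the attention cost is this quantity scaled by a cost parameter. The function name and the symmetric binary-signal example are my own illustrative choices, not objects from the paper:

```python
import numpy as np

def mutual_information(prior, signal_given_state):
    """Shannon mutual information I(state; signal) in nats.

    prior: length-S array of state probabilities.
    signal_given_state: S x M array, row s = P(signal | state s).
    """
    joint = prior[:, None] * signal_given_state      # P(state, signal)
    p_signal = joint.sum(axis=0)                     # marginal P(signal)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0,
                         joint / (prior[:, None] * p_signal[None, :]),
                         1.0)                        # 0 log 0 treated as 0
    return float((joint * np.log(ratio)).sum())

# Two equally likely states and a symmetric binary signal of accuracy q.
prior = np.array([0.5, 0.5])
q = 0.9
strategy = np.array([[q, 1 - q],
                     [1 - q, q]])
cost = mutual_information(prior, strategy)   # multiply by a unit cost of attention
```

Raising q toward 1 pushes the cost toward ln 2 (full learning of a binary state), while q = 0.5 is uninformative and costless; how steeply subjects trade this cost against incentives is exactly the elasticity the experiment probes.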

“Status Quo Bias in Large and Small Choice Sets” - Latest Version November 2008

This paper introduces models of status quo bias based on the concept of decision avoidance, by which a decision maker may select the status quo in order to avoid a difficult decision. These models capture the experimental finding that the status quo is more frequently chosen in larger choice sets. This phenomenon violates the predictions of current preference-based models of status quo bias that assume a decision maker with a fixed status quo will make consistent choices. Using laboratory experiments, I show that subjects in large choice sets do exhibit behavior in line with decision avoidance, while in small choice sets, preference-based models offer a better explanation of behavior. These findings raise questions for advocated policies of “benign paternalism.” Paper
Department of Economics

Columbia University, Rm 1031, International Affairs Bld, 420 W. 118th St., New York, NY, 10027, USA

+1 212 854 3669