Stephanie Schmitt-Grohé and Martín Uribe are Professors of Economics at Duke University. Their main research interest lies in monetary macroeconomics, in particular issues of optimal stabilization policy. Schmitt-Grohé's RePEc/IDEAS entry. Uribe's RePEc/IDEAS entry.
Much of our recent research has been devoted to developing and applying
tools for the evaluation of macroeconomic stabilization policy. This
choice of topic was motivated by an important development in
business-cycle theory. By the late 1990s, a frictionless model of the
macroeconomy was viewed by many as no longer providing a satisfactory
account of aggregate fluctuations. As a response, the new Keynesian
paradigm emerged as an alternative framework for understanding business
cycles. A key difference between the neoclassical and the new Keynesian
paradigms is that in the latter, the presence of various nominal and real
distortions provides a meaningful role for stabilization policy, opening
the door once again, after decades of dormancy, to policy evaluation.
Developing Tools For Policy Evaluation
An obstacle we encountered early on in executing the research agenda described here was the lack of appropriate tools to evaluate stabilization policies in the context of distorted economies. An important part of our effort was therefore devoted to developing such tools.
Most models used in modern macroeconomics are too complex to allow for exact solutions. For this reason, researchers have appealed to numerical approximation techniques. One popular and widely used approximation technique is a first-order perturbation method delivering a linear approximation to the policy function. One reason for the popularity of first-order perturbation techniques is that they do not suffer from the "curse of dimensionality." That is, problems with a large number of state variables can be handled without great computational demands. Because models that are successful in accounting for many aspects of observed business cycles are bound to be large (e.g., Smets and Wouters, 2004; and Christiano, Eichenbaum, and Evans, 2003), this advantage of perturbation techniques is of particular importance for policy evaluation. However, an important limitation of first-order approximation techniques is that the solution displays the certainty-equivalence property. In particular, the first-order approximation to the unconditional means of endogenous variables coincides with their non-stochastic steady-state values. This limitation restricts the range of questions that can be addressed in a meaningful way using first-order perturbation techniques.
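The certainty-equivalence point can be made concrete with a stylized simulation of our own (not part of any published toolkit): under a first-order approximation the law of motion of the state is linear, with coefficients that do not involve the shock volatility, so the simulated unconditional mean coincides with the non-stochastic steady state (normalized to zero here) no matter how large the volatility is.

```python
import numpy as np

# Stylized first-order ("linearized") law of motion for a single state
# variable, written in deviations from a non-stochastic steady state of 0:
#   x_{t+1} = rho * x_t + sigma * eps_{t+1}.
# Certainty equivalence: rho does not depend on sigma, so the
# unconditional mean of x equals the steady state (0) for any sigma.
def mean_of_simulated_state(rho, sigma, T=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    total = 0.0
    for _ in range(T):
        x = rho * x + sigma * rng.standard_normal()
        total += x
    return total / T

# The sample mean stays at the steady state even when sigma is 10x larger:
for sigma in (0.01, 0.10):
    print(sigma, mean_of_simulated_state(rho=0.9, sigma=sigma))
```

A second-order approximation, by contrast, shifts the mean by a constant term proportional to the shock variance, which is exactly the channel through which risk affects average returns and welfare in the discussion below.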
One such question that is of particular relevance for our research agenda is welfare evaluation in stochastic environments featuring distortions or market failures. For example, Kim and Kim (2003) show that in a simple two-agent economy, a welfare comparison based on an evaluation of the utility function using a linear approximation to the policy function may yield the spurious result that welfare is higher under autarky than under full risk sharing. The problem here is that some second- and higher-order terms of the equilibrium welfare function are omitted while others are included. Consequently, the resulting criterion is inaccurate to order two or higher. The same problem arises under the common practice in macroeconomics of evaluating a second-order approximation to the objective function using a first-order approximation to the decision rules. For in this case, too, some second-order terms of the equilibrium welfare function are ignored while others are not. See Woodford (2003, chapter 6) for a discussion of conditions under which it is correct up to second order to approximate the level of welfare using first-order approximations to the policy function. In general, a correct second-order approximation of the equilibrium welfare function requires a second-order approximation to the policy function.
This is what we set out to accomplish in Schmitt-Grohé and Uribe (2004a). Building on previous work by Collard and Juillard, Sims, and Judd, among others, we derive a second-order approximation to the solution of a general class of discrete-time rational expectations models. Specifically, our technique is applicable to nonlinear models whose equilibrium conditions can be written as E_t f(y_{t+1}, y_t, x_{t+1}, x_t) = 0, where the vector x_t is predetermined and the vector y_t is nonpredetermined.
The main theoretical contribution of Schmitt-Grohé and Uribe (2004a) is to show that for any model belonging to this general class, the coefficients on the terms linear and quadratic in the state vector in a second-order expansion of the decision rule are independent of the volatility of the exogenous shocks. In other words, these coefficients must be the same in the stochastic and the deterministic versions of the model. Thus, up to second order, the presence of uncertainty affects only the constant term of the decision rules. But the fact that only the constant term is affected by the presence of uncertainty is by no means inconsequential. For it implies that up to second order the unconditional means of endogenous variables can in general be significantly different from their non-stochastic steady-state values. Thus, second-order approximation methods can in principle capture important effects of uncertainty on average rate-of-return differentials across assets with different risk characteristics and on the average level of consumer welfare.
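In the notation of Schmitt-Grohé and Uribe (2004a), where the decision rule is written as a function g of the state x_t and a perturbation parameter σ scaling the standard deviation of the exogenous shocks, the result can be summarized as:

```latex
g(x_t,\sigma) \;\approx\; g(\bar{x},0) \;+\; g_x\,(x_t-\bar{x})
\;+\; \tfrac{1}{2}\,(x_t-\bar{x})'\,g_{xx}\,(x_t-\bar{x})
\;+\; \tfrac{1}{2}\,g_{\sigma\sigma}\,\sigma^2 ,
```

where the cross derivatives g_σ and g_{xσ} vanish, the coefficients g_x and g_{xx} coincide with those of the deterministic model, and uncertainty enters only through the constant correction (1/2) g_{σσ} σ², which is precisely the term that moves unconditional means away from the non-stochastic steady state.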
An additional advantage of higher-order perturbation methods is that, like their first-order counterparts, they do not suffer from the curse of dimensionality. This is because, given the first-order approximation to the policy function, finding the coefficients of a second-order approximation simply entails solving a system of linear equations.
The main practical contribution of Schmitt-Grohé and Uribe (2004a) is the development of a set of MATLAB programs that compute the coefficients of the second-order approximation to the solution of the general class of models described above. This computer code is publicly available at the authors' websites. Our computer code coexists with others that have been developed recently by Chris Sims and by Fabrice Collard and Michel Juillard to accomplish the same task. We believe that the availability of this set of independently developed codes, which have been shown to deliver identical results for a number of example economies, helps build confidence across potential users of higher-order perturbation techniques.
Optimal Operational Monetary Policy for the U.S. Economy
After the completion of the second-order approximation toolkit, we felt that we were suitably equipped to undertake a systematic and rigorous evaluation of stabilization policy. A contemporaneous development that greatly facilitated our work was the emergence of estimated medium-scale dynamic general equilibrium models of the U.S. economy with the ability to explain the behavior of a relatively large number of macroeconomic variables at business-cycle frequency (e.g., Christiano, Eichenbaum, and Evans, 2003; and Smets and Wouters, 2004).
A central characteristic of the studies on optimal monetary policy that existed at the time we initiated our research on policy evaluation was that they were conducted in the context of highly stylized environments. An important drawback of that approach is that highly simplified models are unlikely to provide a satisfactory account of cyclical movements for more than a few macroeconomic variables of interest. For this reason, the usefulness of this strategy for producing policy advice for the real world is necessarily limited.
In a recent working paper (Schmitt-Grohé and Uribe, 2004b), we depart from the extant literature in that we conduct policy evaluation within the context of a rich theoretical framework capable of explaining observed business-cycle fluctuations for a wide range of nominal and real variables.
Following the lead of Kimball (1995), the model emphasizes the importance of combining nominal and real rigidities in explaining the propagation of macroeconomic shocks. Specifically, the model features four nominal frictions: sticky prices, sticky wages, money in the utility function, and a cash-in-advance constraint on the wage bill of firms; and four sources of real rigidity: investment adjustment costs, variable capacity utilization, habit formation, and imperfect competition in product and factor markets. Aggregate fluctuations are assumed to be driven by supply shocks, which take the form of stochastic variations in total factor productivity, and demand shocks stemming from exogenous innovations to the level of government purchases.
Altig et al. (2003) and Christiano, Eichenbaum, and Evans (2003) argue that the model economy for which we seek to design optimal operational monetary policy can indeed explain the observed responses of inflation, real wages, nominal interest rates, money growth, output, investment, consumption, labor productivity, and real profits to productivity and monetary shocks in the postwar United States. In this respect, Schmitt-Grohé and Uribe (2004b) aspires to be a step forward in the research program of generating monetary policy evaluation that is of relevance for the actual practice of central banking.
In our quest for the optimal monetary policy scheme, we restrict attention to what we call operational interest-rate rules. By an operational interest-rate rule we mean one that satisfies three requirements. First, it prescribes that the nominal interest rate be set as a function of a few readily observable macroeconomic variables. In the tradition of Taylor (1993), we focus on rules whereby the nominal interest rate depends on measures of inflation, aggregate activity, and possibly its own lag. Second, the operational rule must induce an equilibrium satisfying the zero lower bound on nominal interest rates. And third, operational rules must render the rational expectations equilibrium unique. This last restriction closes the door to expectations-driven aggregate fluctuations.
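As a purely illustrative sketch, a rule of the Taylor (1993) type satisfying the first two requirements can be written as follows; the coefficient values are placeholders of our own, not the optimized coefficients reported in Schmitt-Grohé and Uribe (2004b):

```python
# Illustrative operational interest-rate rule: the nominal rate responds
# to the inflation deviation from target, a measure of aggregate
# activity, and possibly its own lag, and is truncated at zero so the
# prescribed rate respects the zero lower bound.
def operational_rule(pi_dev, activity, i_lag,
                     i_star=0.04,    # steady-state nominal rate (assumed)
                     alpha_pi=1.5,   # inflation response (placeholder)
                     alpha_y=0.5,    # activity response (placeholder)
                     alpha_i=0.0):   # interest-rate smoothing (placeholder)
    i = i_star + alpha_pi * pi_dev + alpha_y * activity + alpha_i * i_lag
    return max(i, 0.0)

# Inflation one point above target raises the rate by alpha_pi points...
print(operational_rule(pi_dev=0.01, activity=0.0, i_lag=0.04))
# ...while a deep slump cannot push the prescribed rate below zero:
print(operational_rule(pi_dev=-0.05, activity=-0.10, i_lag=0.0))
```

Note that the third requirement, uniqueness of the rational expectations equilibrium, is a property of the full equilibrium system, not of this function in isolation; verifying it requires solving the model.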
The object that monetary policy aims to maximize in our study is the expectation of lifetime utility of the representative household conditional on a particular initial state of the economy. Our focus on a conditional welfare measure represents a fundamental departure from most existing normative evaluations of monetary policy, which rank policies based upon unconditional expectations of utility. Exceptions are Kollmann (2003) and Schmitt-Grohé and Uribe (2004c). As Kim et al. (2003) point out, unconditional welfare measures ignore the welfare effects of transitioning from a particular initial state to the stochastic steady state induced by the policy under consideration. Indeed, we document that under plausible initial conditions, conditional welfare measures can result in different rankings of policies than the more commonly used unconditional measure. This finding highlights the fact that transitional dynamics matter for policy evaluation.
In our welfare evaluations, we depart from the widespread practice in the neo-Keynesian literature on optimal monetary policy of limiting attention to models in which the non-stochastic steady state is undistorted. Most often, this approach involves assuming the existence of a battery of subsidies to production and employment aimed at eliminating the long-run distortions originating from monopolistic competition in factor and product markets. The efficiency of the deterministic steady-state allocation is assumed for purely computational reasons. For it allows the use of first-order approximation techniques to evaluate welfare accurately up to second order, a simplification that was pioneered by Rotemberg and Woodford (1999). This practice has two potential shortcomings. First, the instruments necessary to bring about an undistorted steady state (e.g., labor and output subsidies financed by lump-sum taxation) are empirically uncompelling. Second, it is ex ante not clear whether a policy that is optimal for an economy with an efficient steady state will also be so for an economy where the instruments necessary to engineer the undistorted steady state are unavailable. For these reasons, we refrain from making the efficient-steady-state assumption and instead work with a model whose steady state is distorted.
Working with a model whose steady state is not Pareto efficient has a number of important ramifications. One is that to obtain a second-order accurate measure of welfare it no longer suffices to approximate the equilibrium of the model up to first order. Instead, we obtain a second-order accurate approximation to welfare by solving the equilibrium of the model up to second order. Specifically, we use the methodology and computer code developed in Schmitt-Grohé and Uribe (2004a).
Our numerical work suggests that in the model economy we study, the optimal operational interest-rate rule takes the form of a real-interest-rate targeting rule: it features an inflation coefficient close to unity, a mute response to output, no interest-rate smoothing, and is forward-looking. The optimal rule satisfies the Taylor principle because the inflation coefficient is greater than, albeit very close to, unity.
Optimal operational monetary policy calls for significant inflation volatility. This result stands in contrast to those obtained in the related literature. The main element of the model driving the desirability of inflation volatility is indexation of nominal factor and product prices to one-period lagged inflation. Under the alternative assumption of indexation to long-run inflation, the conventional result of the optimality of inflation stability reemerges.
Open Questions
There remain many challenging unanswered questions in this research program. One is to investigate the sensitivity of the parameters of the optimal operational policy rule to changes in the sources of uncertainty driving business cycles. This question is of importance in light of the ongoing quest in business-cycle research to identify the salient sources of aggregate fluctuations. One alternative would be to incorporate the rich set of shocks identified in econometric estimations of the model considered here (e.g., Smets and Wouters, 2004).
The class of operational rules discussed here is clearly not exhaustive. It would be of interest to investigate whether the inclusion of macroeconomic indicators other than those considered here would improve the policymaker's ability to stabilize the economy.
In particular, the related literature has emphasized measures of the output gap that are different from ours. Additionally, it has been argued that in models with nominal wage and price rigidities, optimal policy should target an average of wage and price inflation, as opposed to only price inflation, which is the case we analyze.
The optimal policy problem we analyze takes the central bank's inflation target as exogenously given. A natural extension is to endogenize this variable. However, in our theoretical framework, the optimal inflation target is the one associated with the Friedman rule. This is because the assumption of full indexation to past inflation implies the absence of inefficient price and wage dispersion in the long run.
Thus the only remaining nominal frictions are the demand for money by households and firms. These frictions call for driving the opportunity cost of holding money to zero in the long run. In other words, the zero bound on nominal interest rates binds in the non-stochastic steady state. The perturbation technique we employ is ill-suited to handle this case. Therefore, analyzing the case of an endogenous inflation target entails either changing the model so that the Friedman rule is no longer optimal in the long run or adopting alternative numerical techniques for computing welfare accurately up to second order or higher.
One of our findings is that the initial state of the economy plays a role in determining the parameters defining the optimal interest-rate rule. This finding suggests that the optimal operational rule identified here is time inconsistent. In Schmitt-Grohé and Uribe (2004b), we assume that the government is able to commit to the policy announcements made at time 0. It would be of interest to characterize optimal operational rules in an environment without commitment.
Finally, we limit attention to the special case of passive fiscal policy, taking the form of a balanced-budget rule with lump-sum taxation.
It is well known that the set of operational monetary rules depends on the stance of fiscal policy. For instance, the determinacy properties of the rational expectations equilibrium associated with a particular monetary rule can change as fiscal policy is altered.
Therefore, it would be of interest to introduce operational fiscal rules as an additional policy instrument.
References
Altig, David, Lawrence J. Christiano, Martin Eichenbaum, and Jesper Lindé (2003): "Technology Shocks and Aggregate Fluctuations," manuscript, Northwestern University.
Christiano, Lawrence J., Martin Eichenbaum, and Charles Evans (2003): "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy," manuscript, Northwestern University.
Kim, Jinill, and Sunghyun Henry Kim (2003): "Spurious Welfare Reversals in International Business Cycle Models," Journal of International Economics, vol. 60, pages 471-500.
Kim, Jinill, Sunghyun Henry Kim, Ernst Schaumburg, and Christopher Sims (2003): "Calculating and Using Second Order Accurate Solutions of Discrete Time Dynamic Equilibrium Models," Finance and Economics Discussion Series 2003-61, Board of Governors of the Federal Reserve System.
Kimball, Miles S. (1995): "The Quantitative Analytics of the Basic Neomonetarist Model," Journal of Money, Credit and Banking, vol. 27, pages 1241-1277.
Kollmann, Robert (2003): "Welfare Maximizing Fiscal and Monetary Policy Rules," mimeo, University of Bonn.
Rotemberg, Julio J., and Michael Woodford (1999): "Interest Rate Rules in an Estimated Sticky Price Model," in: John B. Taylor, ed., Monetary Policy Rules, NBER Conference Report series, Chicago and London: University of Chicago Press, pages 57-119.
Schmitt-Grohé, Stephanie, and Martín Uribe (2004a): "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function," Journal of Economic Dynamics and Control, vol. 28, pages 755-775.
Schmitt-Grohé, Stephanie, and Martín Uribe (2004b): "Optimal Operational Monetary Policy in the Christiano-Eichenbaum-Evans Model of the U.S. Business Cycle," NBER working paper 10724.
Schmitt-Grohé, Stephanie, and Martín Uribe (2004c): "Optimal Simple and Implementable Monetary and Fiscal Rules," NBER working paper 10253.
Smets, Frank, and Raf Wouters (2004): "Comparing Shocks and Frictions in US and Euro Area Business Cycles: A Bayesian DSGE Approach," Working Paper 61, Nationale Bank van België.
Taylor, John B. (1993): "Discretion versus Policy Rules in Practice," Carnegie-Rochester Conference Series on Public Policy, vol. 39, pages 195-214.
Woodford, Michael (2003): Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton, NJ: Princeton University Press.
Thomas Holmes is Professor in the Department of Economics at the University of Minnesota. He has recently been working on the spatial distribution of economic activity as well as basic issues in the organization of production. Holmes' RePEc/IDEAS entry.

EconomicDynamics: In your RED article, you demonstrate that if an industry is situated at an
inefficient location through accidents of history, it will eventually
migrate to efficient locations. Does this result apply to globalization,
which should thus be regarded as inevitable? And why is the modeling of
dynamics so important in this regard?

Thomas Holmes: I have to confess that my result in the RED article, "Step-by-Step
Migrations," wouldn't usually apply to globalization. It is more about the
migration of industry within a country or even a region within a country.
But I am glad you asked the question, because it is a great one for
clarifying exactly what my result does cover. And it gives me a chance to
plug some related work!
A large literature (e.g., Paul Krugman, Brian Arthur, and others) has emphasized
that when agglomeration economies are important, industries can get "stuck"
in an inefficient location. No individual firm is unilaterally willing to
leave for the better location because of the "glue" of agglomeration
benefits. In other words, there is a coordination failure.
My model differs from the previous literature in that rather than being
forced to take a big discrete "jump" to go to some new location and forego
all agglomeration benefits, a firm can take a small "step" in the direction
of the new location. With a small step, the firm retains some of the
agglomeration benefits of the old location, but also begins to enjoy some of
the advantages of the new location. I show that industries never get stuck
at locations that are inefficient from the perspective of local optimality,
and I present a condition under which migration rates are efficient. As an
application of the theory, I discuss the migration of the automobile
industry in the U.S. from Michigan to the South. It is clear that this
industry has moved in a step-by-step fashion.
So we see that this specific model doesn't apply to globalization.
If the textile industry is in North Carolina, and the efficient location is
Africa, by taking a small step in this direction, firms would have to set up
shop in the Atlantic Ocean! Nevertheless, the broad idea of the project
that migrations can take place in a step-by-step fashion does apply to the
issue of globalization. In a 1999 Journal of Urban Economics article, I
observe that an industry like textiles has many different kinds of products.
Agglomeration benefits are not so important for the production of coarse
cloth, but they are important for advanced textiles. Low-end products tend
to migrate first, but these then set up a base of agglomeration benefits
that may draw in medium-level products, which in turn may draw in the next-level
products, and so on.
The larger point of this set of papers is that the attractive force
of production efficiency can be powerful even when agglomeration forces are
important. There will usually be somebody who will be drawn in by
production advantages of a better location and the first migration makes the
second one easier. The modeling of dynamics in this analysis is crucial,
since step-by-step migrations are inherently a dynamic phenomenon.

ED:
Modern Macroeconomics is based on strong microfoundations, yet one of its
essential components, the production function, is still a black box. Your
recent work with Matt Mitchell is looking at the interaction of skilled
work, unskilled work and capital. What should a macroeconomist used to a
Cobb-Douglas representative production function retain from this work?

TH: One goal of this project is precisely to get into the black box of
the production function. Important recent work, such as Krusell, Ohanian, Ríos-Rull, and Violante (2000), utilizes capital-skill complementarity properties of the production function to explain phenomena such as changes in the skill premium. But there is little in the way of microfoundations of the production function that delivers capital-skill complementarity.
My recent work with Matt Mitchell develops such a micro model of
production. The central idea of our model is that unskilled labor relates
to capital in the same way that skilled labor relates to unskilled.
Unskilled labor has general ability in the performance of mechanical tasks.
An unskilled worker can easily switch from tightening a bolt to picking up
a paintbrush to emptying the trash. It may be possible to
substitute a machine to undertake any one of these tasks, but this would
generally require upfront investment, a fixed cost, to design a machine
specific to this task. In an analogous way, skilled labor has general
ability in the performance of mental tasks. A worker with a degree in
engineering can be put in charge of a production line and has general
knowledge to make appropriate decisions when unexpected problems arise.
Alternatively, a production process may be redesigned and routinized to
reduce the amount of uncertainty, making it possible for an unskilled worker
to run the production line, instead. In this analysis, the scale of
production is crucial for determining how to allocate tasks. For
smallscale production, e.g. for a prototype model, unskilled labor will do
the mechanical tasks and skilled labor will manage the production line. For
largescale production, capital will do the mechanical tasks and unskilled
labor will manage the production line.
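The scale logic in this answer can be sketched as a toy cost comparison (with our own illustrative numbers, not the calibration in Holmes and Mitchell, 2003): a mechanical task can be performed by unskilled labor at a constant unit cost, or by a machine that requires an up-front design cost but has a lower marginal cost.

```python
# Toy task-assignment problem: produce q units of a mechanical task
# either with unskilled labor (cost w per unit) or with a purpose-built
# machine (fixed design cost F, marginal cost c < w). All parameter
# values are hypothetical placeholders.
def cheapest_technology(q, w=1.0, F=50.0, c=0.2):
    labor_cost = w * q
    machine_cost = F + c * q
    return "unskilled labor" if labor_cost <= machine_cost else "capital"

# The break-even scale is q* = F / (w - c) = 62.5 here: below it the
# prototype is made by hand, above it the task is mechanized.
print(cheapest_technology(10))    # small-scale production
print(cheapest_technology(500))   # large-scale production
```

The same threshold logic, applied to mental tasks (routinizing a production line at a fixed redesign cost), is what lets a large-scale line be run by unskilled labor instead of an engineer.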
We use our model to (1) provide microfoundations for capital-skill
complementarity, (2) provide a theory for how factor composition (e.g.
capital intensity and skilled worker share) varies with plant size and (3)
provide a theory of how expansion of markets through increased trade affects
the skill premium. Our theory is consistent with certain facts about factor
allocation and factor price changes in the 19th and 20th centuries.
Since you brought up Cobb-Douglas, a good question is whether or not our theory can provide microfoundations for Cobb-Douglas, analogous to Houthakker (1955). The answer is no. Not only is our aggregate production function not Cobb-Douglas, it isn't even homothetic. In fact, one of the key points of our paper is that a proportionate increase of all factors of production (which is what happens when two similar countries begin to trade) can change relative factor prices.
I know it's an uphill battle to wean macroeconomists off the
Cobb-Douglas production function. Macroeconomists love it not just because
of its tractability but also because of the constancy of the capital share. But
if macroeconomists want to understand phenomena like changes in the skill
premium, I believe they have to get out of a Cobb-Douglas world. At this
point, my model with Matt is too primitive to make it suitable for a
quantitative macro analysis. But I believe that there is a potential for a
next generation of models in this class to be useful for quantitative
analysis.
References:
Holmes, Thomas J. (1999): "How Industries Migrate When Agglomeration Economies Are Important," Journal of Urban Economics, vol. 45, pages 240-263.
Holmes, Thomas J. (2004): "Step-by-Step Migrations," Review of Economic Dynamics, vol. 7, pages 52-68.
Holmes, Thomas J., and Matthew F. Mitchell (2003): "A Theory of Factor Allocation and Plant Size," Federal Reserve Bank of Minneapolis Staff Report 325.
Houthakker, H. S. (1955): "The Pareto Distribution and the Cobb-Douglas Production Function in Activity Analysis," The Review of Economic Studies, vol. 23, pages 27-31.
Krusell, Per, Lee Ohanian, José-Víctor Ríos-Rull, and Giovanni Violante (2000): "Capital-Skill Complementarity and Inequality: A Macroeconomic Analysis," Econometrica, vol. 68, pages 1029-1053.
Society for Economic Dynamics: Letter from the President
Dear SED Members and Friends:
The 2004 meetings of the SED, held in Florence, Italy, were a great success. Ten and even eleven parallel sessions ran at a time over three days, with 370 papers presented. For the high quality of the program we should thank its organizers, Jeremy Greenwood and Gianluca Violante.
The 2005 meetings will be held in Budapest, June 23-25. The call for papers is at http://www.economicdynamics.org/currentSED.htm, and some of the planned events are already listed there. The program co-chairs are Rob Shimer and Marco Bassetto, and the plenary speakers this year will be Stephen Morris, Jonathan Eaton, and Richard Rogerson. The submission deadline is January 31.
Gary Hansen continues his dedicated and able management of RED. We are happy to report that RED is included in the 2003 edition of the Journal Citation Reports published by ISI. Although ISI has been indexing RED for several years now, this is the first year that statistics about RED have appeared in print.
I urge you to submit your best work to RED, and I hope to see all of you this June in Budapest.
Sincerely,
Boyan Jovanovic,
President
Society for Economic Dynamics
Finn Kydland and Edward Prescott win Nobel Prize
Finn Kydland and Edward Prescott are the recipients of the 2004 Nobel Prize in Economics, as all readers of this Newsletter are certainly aware. They have been instrumental in the development of the Society, be it as members, as President (Prescott from 1992 to 1995), through involvement in the Review of Economic Dynamics or in the Society meetings, and by being advisors to many members of the Society for Economic Dynamics. The maximum number of degrees of separation between Finn or Ed and any member of the SED must be very low!
More generally, their work has been instrumental in reshaping macroeconomics into a modern, microfounded, data-driven field. It has immensely expanded what we can now do with theory, as it triggered a large development of new tools that help answer more and more important questions. Their impact has not just been academic, but has had concrete implications for the conduct of policy, in particular for central banks.
The Nobel Foundation has posted an interesting review of their work. Be sure to read it!
Society for Economic Dynamics: Call for Papers, 2005 Meetings
The 2005 meetings of the Society for Economic Dynamics will be held 23-25 June 2005 in Budapest (Hungary). The plenary speakers are Stephen Morris, Jonathan Eaton, and Richard Rogerson. The program committee is composed of Robert Shimer and Marco Bassetto, and Max Gillman and Akos Valentinyi are the local organizers.
The program will be made up from a selection of invited and submitted
papers. The Society welcomes submissions for the program. Submissions may
be from any area in Economics. The program committee will select the
papers for the conference. As well as considering individual papers for
the conference, the program committee will also entertain suggestions for
four-paper sessions. The deadline for submissions is 31 January 2005.
Details are available at
http://www.EconomicDynamics.org/currentSED.htm.
Review: Bagliano and Bertola's Models for Dynamic Macroeconomics
Models for Dynamic Macroeconomics
by Fabio Cesare Bagliano and Giuseppe Bertola
Yet another textbook on dynamic macroeconomics, one may think. This book is different from the others in terms of its target audience. It aims at advanced undergraduates, say in an Honors program or in programs that go deeper into Economics. It is also suited for beginning graduate students, say those in terminal Master's programs for whom the standard graduate textbook may be overwhelming. It covers material that is treated in undergraduate textbooks, but in a more technical fashion, without drawing on tools that go beyond undergraduate mathematics. This allows the book to concentrate on a rigorous treatment of Economics without burdening students with many new mathematical concepts.
The topics are standard for any good treatment of dynamic macroeconomics:
permanent income and consumption, precautionary savings, CCAPM, optimal
investment with adjustment costs, dynamics of the labor market, dynamic
general equilibrium, endogenous growth, as well as search models. It also
includes exercises with answers.
While it does not cover some popular topics, like overlapping generations and
business cycle models, it provides the right amount of technical analysis
and economic intuition that is required of the target audience. This book is
an excellent addition to the toolbox used in teaching dynamic
macroeconomics.
Models for Dynamic Macroeconomics is published by Oxford University Press.
Impressum
The EconomicDynamics Newsletter is published twice a year, in April and November.
Subscribing/Unsubscribing/Address change
To subscribe to the EconomicDynamics mailing list, send a message to jiscmail@jiscmail.ac.uk with the following message body:
join economicdynamics myfirstname mylastname
stop
To unsubscribe from the EconomicDynamics mailing list, send a message to jiscmail@jiscmail.ac.uk with the following message body:
leave economicdynamics
stop
To change a subscription address, please first unsubscribe and then subscribe. In case of problems, contact economicdynamics-request@jiscmail.ac.uk.
The EconomicDynamics mailing list has very low traffic, fewer than 8 messages a year. For weekly announcements about online papers in Dynamic General Equilibrium or relevant conferences, you may want to subscribe to nep-dge, in the same way as described above.