\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{amsmath,amsthm}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage{amssymb}
\usepackage{enumerate}
\usepackage{mathtools}
\usepackage{natbib}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[autostyle]{csquotes}
% \usepackage[backend=biber,style=authoryear]{biblatex}
% \addbibresource{jmp.bib}
\usepackage[normalem]{ulem}
\title{JMP: A New Keynesian Model Useful for Policy}
\author{David Staines}
\date{\today}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{assumption}{Assumption}
\newtheorem{remark}{Remark}
\newtheorem{condition}{Condition}
\newtheorem{proposition}{Proposition}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\begin{document}
\maketitle
\begin{abstract}
I add price dispersion to a benchmark zero-inflation steady-state New Keynesian model. I do so by assuming the economy has experienced a history of shocks, which have caused the Central Bank to miss its targets for inflation and output, as opposed to the conventional practice of linearizing around a non-stochastic steady state. I then allow the inflation-targeting Central Bank to optimize policy. The results are truly startling.\par The model simultaneously embeds endogenous inflation and interest rate persistence in an institutionally-consistent optimizing framework. This creates a meaningful trade-off between inflation and output-gap stabilization following demand and technology shocks. This resolves the so-called 'Divine Coincidence', explains the preference for 'coarse-tuning' over 'fine-tuning' and the focus in policy circles on inflation forecast targeting. When estimated, the model performs well against a battery of demanding econometric tests. \par Along the way, a novel econometric test of the 'Divine Coincidence' is developed; it is rejected in favor of a substantial trade-off. A welfare equivalence is derived between a class of New Keynesian models and their flexible price counterparts, suggesting previously proposed resolutions may be inadequate. Finally, a novel paradox relating the 'Divine Coincidence' to 'fine-tuning' stabilization policy is derived.\end{abstract}
\section{Introduction}
The macroeconomic profession has broadly settled on a \emph{New Neo-Classical Synthesis} \cite{goodfriend1997new}. This approach adds staggered price adjustment to the skeleton of the Real Business Cycle model to generate micro-founded complements to the aggregate demand and aggregate supply equations of Old Keynesian economics. The Euler equation specifying optimal consumption and the optimal price-setting relation, the New Keynesian Phillips curve (NKPC), simplify to their Old Keynesian counterparts when non-contemporaneous exogenous variables are held fixed. When this constraint is relaxed, it is necessary to specify an interest rate rule consistent with the 'Taylor Principle' to generate stable solution paths.\footnote{The 'Taylor Principle' states that the real interest rate should be expected to increase in response to higher inflation, in order to drive inflation back to target following a shock to expectations and so prevent sunspot equilibria; see \cite{woodford2001taylor}. The term was coined in respect of the seminal work on monetary policy rules \cite{taylor1993discretion}. Simultaneously, \cite{henderson1993comparison} proposed a similar rule.} Thus a three-equation setup emerges.
The combination of rational expectations with monetary non-neutrality allows one to analyze systematic monetary policy seemingly unhindered by the Lucas Critique \cite{lucas1976econometric}. For this reason, the three-equation New Keynesian framework underpins the field of Dynamic Stochastic General Equilibrium (DSGE) modeling, now popular in academia and with Central Banks.
\par Nevertheless, New Keynesianism has had problems distinguishing itself from the Neo-Classical tradition in terms of policy prescriptions and model predictions. Macroeconomics as a separate intellectual discipline came into existence following \emph{The General Theory} \cite{keynes1936general} in order to learn how best to mitigate inefficient business cycle fluctuations. The lack of inefficient fluctuations in benchmark Real Business Cycle models, such as \cite{long1983real} and \cite{barro1984time}, is the most unappealing aspect of Neo-Classical modeling and the reason why it has never enjoyed favor in policy circles.
\par Unfortunately, comparable New Keynesian models also suffer from this problem. \cite{correia2008optimal} show that the Central Bank can implement the social optimum when the government is using a standard set of distortionary tax instruments to correct static market failures. \cite{woodford2000neo} and \cite{michael2002inflation} show that optimal monetary policy can successfully stabilize inflation and the output gap simultaneously under a wide variety of shock processes. This result has been labeled the 'Divine Coincidence' \cite{blanchard2007real} (henceforth DC).\footnote{The possibility of a binding zero bound on nominal interest rates overturns this conclusion; in particular it is optimal for the Central Bank to allow a period of above-target inflation and output immediately following a zero bound spell \cite{eggertsson2003zero}. However, the problem may reoccur if we adopt the more empirically credible assumption that Central Banks can mimic negative nominal interest rates through quantitative easing.} It is anathema in policy circles. 'Inflation nutters' is the uncharitable description former Bank of England Governor Mervyn King gave to those advocating complete inflation stabilization as a policy objective, as if the DC applied \cite{king1997changes}.
\par This reflects a fundamental disjuncture between optimal monetary policy in theory and successful policy practice. I show that in the stochastic complement to the DC framework, deviations of inflation and the output gap from target should be white noise. Therefore, in the limit as Central Banks become better able to observe shocks in real time and to change policy rates more frequently, they can 'fine-tune'\footnote{The term is frequently attributed to Walter Heller, Chief Economic Adviser to President Kennedy; see for example http://connection.ebscohost.com/c/reference-entries/40422478/fine-tuning-1960s-economics. It referred originally to fiscal policy in an 'Old Keynesian' setup. Scepticism about the concept was central to monetarist opposition to traditional Keynesian macroeconomics \cite{snowdon2005modern}; see for example \cite{friedmanrole}.} away all fluctuations in inflation and the output gap. \par Secondly, there is an 'inflation persistence puzzle'. In the data inflation appears persistent across all nations, time periods, policy regimes, levels of aggregation and plausible assumptions about trends in other macroeconomic variables \cite{o2005has}\cite{pivetta2007persistence}\cite{gerlach2012inflation}\cite{imbs2011sectoral}\cite{meller2012inflation}\cite{kouretas2012dynamics}\cite{plakandaras2014us}\cite{vaona2012regional}\cite{tillmann2013inflation}\cite{choi2013heterogeneous}\cite{nakamura2013price}.\footnote{It is worth noting that many studies are able to reject the null of no persistence in inflation even when they have sufficient power to uncover statistically significant changes in persistence across policy regimes. Although there is considerable heterogeneity in inflation persistence across sectors, macroeconomic persistence is not a figment of aggregation bias.} The forward-looking NKPC is strongly rejected in favor of a hybrid specification containing lagged as well as future inflation, a result that has not been adequately explained in a consistent theoretical fashion; see \cite{roberts1997inflation}\cite{gali1999inflation}\cite{rudd2005new}\cite{fuhrer2006intrinsic}\cite{whelan2007staggered}\cite{rumler2007estimates} amongst a voluminous literature.
In the DC model inflation has no persistence. Worse still, when the DC is relaxed by allowing persistent distortionary shocks\footnote{Distortionary shocks affect the wedge between actual and efficient output. They enter into the NKPC, where they are often called 'cost-push' shocks; their interpretation and misinterpretation are discussed in Section 4.}, optimal policy amounts to a form of price-level targeting \cite{woodford2010optimal}. Therefore, inflation inherits \emph{negative persistence}. \par This contrasts with best practice among inflation targeting Central Banks, who practice so-called 'coarse-tuning' \cite{lindbeck1992macroeconomic}. They realize that inflation possesses intrinsic persistence, so that they cannot hit inflation and output targets in every period. Instead they practice so-called \emph{inflation forecast targeting} \cite{kohn2009policy}\cite{svensson2010inflation}\cite{svensson2012evaluating}. This is where policy, and projections for future policy, are adjusted to yield a desirable expected path for inflation and real activity consistent with \emph{medium term} stability, usually defined as forecast inflation and output gaps sufficiently close to target after a time frame of 18 months to 3 years\footnote{Studies with VARs and policymakers' wisdom suggest that it takes between 18 months and two years for a change in monetary policy to have its maximum impact on inflation. This result seems to be robust across changes in policy regimes; see \cite{orlowski2000ben} (pp. 315-320)\cite{batini2001lag}\cite{gerlach2003money}.
On its website the Bank of England advises the general public that: "Monetary policy operates with a time lag of about two years." http://www.bankofengland.co.uk/monetarypolicy/Pages/overview.aspx However, the Bank publishes forecasts three years ahead and frequently talks about "inflation returning to target by the three year horizon", consistent with a longer view of the stabilization period and with empirical work by \cite{havranek2013}. http://www.bankofengland.co.uk/publications/Pages/inflationreport/infrep.aspx Practices are similar at other leading inflation targeting Central Banks.}.
\par The three-equation framework was designed explicitly to address issues of optimal monetary policy and its effects upon inflation and real activity. New Keynesian theory is failing the test of policy relevance. \cite{chari2009new} were right: New Keynesian models are not yet fit for purpose.
\par A second challenge confronting New Keynesian modeling is the policy persistence puzzle. Interest rates are highly persistent, much more so than underlying shock processes. This means that estimated Taylor rules require coefficients on lagged rates near unity to purge serial correlation and represent an optimizing relationship \cite{coibion2012target}\cite{vazquez2013informational}.\footnote{These papers control respectively for non-rational expectations on the part of the policymaker and for data revisions. See also \cite{rudebusch2002term}\cite{petra2004interest}\cite{rudebusch2006monetary}\cite{carrillo2007monetary}\cite{conraria2014estimating}. The acclaimed Norges Bank (the Central Bank of Norway) chooses to insert a substantial interest rate stabilization term, ad hoc with respect to its policy mandate, into the loss function used to derive optimal policy, in order to generate policy rate predictions consistent with credible application of inflation forecast targeting \cite{bergo2007interest}\cite{holmsen2007implementing}. Other Central Banks sidestep this problem by using subjective judgments or information from futures rates or the yield curve, which have expectations of policy persistence built in \cite{ang2007no}\cite{hamilton2011estimating}.} Attempts to explain why this might be optimal have so far proven unconvincing. It is widely suspected that the interest rate and inflation persistence puzzles are related, and several attempts at a simultaneous resolution have been made: \cite{cogley2008trend} and \cite{cogley2010inflation} incorporate shocks to the trend rate of inflation. My analysis most closely follows \cite{alves2014lack}, who uses trend inflation to generate inflation persistence and resolve the DC. I show my resolution is not only more parsimonious but also more empirically credible than adding shocks to trend inflation.
\par I am able to derive a resolution to the inflation persistence, interest rate persistence and optimal policy paradoxes simultaneously. I show that trend inflation is not necessary to resolve the DC; all that is required is to appreciate that a history of shocks to the economy will generate price dispersion, and that the New Keynesian model should therefore be linearized around a stochastic steady state with price dispersion, rather than around the non-stochastic case with zero price dispersion, as is currently ubiquitous.\footnote{This approach is similar to that of \cite{juillard2005solving}\cite{michel2011local}\cite{coeurdacier2011risky}, although their focus on deriving risk premia causes them to consider perturbations of order two and higher, which I do not need.} This allows New Keynesianism to fully differentiate its welfare implications from those of Real Business Cycle theory, verifying the suspicion long held by policymakers and academics that, provided there is some nominal rigidity, inefficient business cycle fluctuations will take place that cannot be fully mitigated by Central Bankers, such that substantive trade-offs exist in monetary policymaking. Price dispersion is the internal combustion engine driving the New Keynesian car. \par I discuss the advantages of my resolution of these paradoxes over the trend inflation approach and others in the literature. The result is what I believe to be a New Keynesian model fit for purpose.
The paper proceeds as follows. Section 2 derives the simplest New Keynesian model, focusing on its non-stochastic zero inflation steady state. Section 3 considers optimal policy and formalizes the Divine Coincidence (DC); this includes a cross-country econometric investigation. Section 4 considers optimal policy and persistence problems. Section 5 is the major contribution of the paper: it is where I derive optimal policy under inflation targeting with price dispersion. Section 6 covers simulation and estimation of the complete model. Section 7 augments the basic model with nominal wage rigidity. Section 8 derives paradoxes related to 'fine-tuning' and a welfare equivalence for a class of New Keynesian models with real distortions. Finally, Section 9 draws conclusions and suggests directions for future research.
\section{New Keynesian Model}
This section exposits the benchmark New Keynesian model which forms the basis for subsequent analysis, with emphasis on aspects pertinent to forthcoming results.
\subsection{Household's Problem}
\par There is a representative household which solves the following problem:
\begin{equation}\max_{C_{t}, l_{t}} \: {E_{t}\sum_{T=t}^{\infty}\beta^{T-t}[u(C_{T})-\varphi_{T}\nu(l_{T})]\psi_{T}}\end{equation}
subject to the Budget Constraint:
\begin{equation}P_{t}C_{t}+B_{t}=(1+i_{t-1})B_{t-1}+P_{t}W_{t}l_{t}+\int_{i}{\Pi_{t}(i)}{d}i\end{equation}
$\beta$ is the discount factor, $C$ refers to aggregate consumption whilst $l$ is labor supply. $B$ refers to the holding of one-period risk-free nominal bonds. $i_{t}$ is the risk-free nominal interest rate paid at the end of period $t$ on the bond. $P$ is the price level; bonds are the numeraire here. $W$ is the real wage. Finally $\Pi_{t}(i)$ is profit from an individual firm $i$. \par Note that in a stochastic environment firms need not make the same profits even in a symmetric equilibrium: once price rigidity is introduced, firms with the same demand curve will charge different prices depending on when they last reoptimized, and can therefore make different levels of profit. $\psi_{T}$ and $\varphi_{T}$ are disturbance terms. The former affects intertemporal consumption preferences, the latter the willingness to supply labor. \emph{In Section 3} more sophisticated interpretations of these stochastic terms will be discussed. The budget constraint states that the uses of nominal income, consumption and saving, must equal the sources of income: wealth, labor income and dividends.
\par Consumption is desirable but working is undesirable so $u$ and $\nu$ are increasing. $u$ is concave to incentivize consumption smoothing whilst $\nu$ is convex to encourage workers to take leisure. An Inada condition on $u$, a restriction on the process governing $\psi$, a zero initial wealth condition and a transversality condition
serve to ensure an interior solution. They are as follows:
\begin{equation}\lim_{c\to{0}^{+}} u'(c)=\infty\end{equation}
\begin{equation}B_{0}=0\end{equation}
\begin{equation}E_{t}\int_{S}\sum_{T=t}^{\infty}\beta^{T-t}[u(C_{T})-\varphi_{T}\nu(l_{T})]\psi_{T}< \infty \end{equation}
for a.e. $S\in S'$,
where $S$ is any member of $S'$, the $\sigma$-algebra generated by the stochastic process $F$ which assigns probability measure to the individual shock processes $\psi$ and $\varphi$, although we could imagine the model being augmented with further disturbance processes reflecting additional sources of macroeconomic variability.
\begin{equation} \lim_{T\to \infty} E_{t}\frac{B_{T}}{P_{T}}u_{C}(C_{T})\geq 0 \end{equation}
Equation (3) is a "no-starvation" condition; it ensures the agent will always choose to consume even though working is costly. Equation (4) stops the agent living off their savings and allows me to avoid articulating a specific government budget constraint. Together (3) and (4) ensure an interior labor supply. Equation (5) ensures local uniqueness by stopping the objective function exploding. Finally, equation (6) is a "No-Ponzi" condition; it forces the agent to honor the present value of their debts. Due to monotonicity of the utility function the constraint will always bind with equality; if it were left out the agent would simply borrow an infinite amount and never repay.
\par The first order conditions for the household are as follows:
\begin{equation}u_{c}(C_{t})=(1+i_{t})\beta E_{t}u_{c}(C_{t+1})\frac{\psi_{t+1}}{\psi_{t}}\frac{P_{t}}{P_{t+1}}\end{equation}
\begin{equation}u_{c}(C_{t})W_{t}=\varphi_{t}\nu_{l}(l_{t})\end{equation}
Equation (7), the so-called Euler equation, specifies the path for optimal consumption, whilst equation (8), which equates the marginal costs and benefits of working, yields the labor supply curve. When the model is approximated the following parameters of the utility function will be used: $\sigma=\frac{-Cu_{cc}}{u_{c}}$ and $\eta=\frac{l\nu_{ll}}{\nu_{l}}$; both are strictly positive to ensure an interior solution. They measure respectively the concavity of consumption utility and the convexity of the disutility from work. $\sigma$ is the inverse of the inter-temporal elasticity of substitution, the consumer's willingness to shift consumption across time periods; similarly $\eta$ is inversely related to the inter-temporal elasticity of labor supply.\footnote{$\sigma$ is also the coefficient of relative risk aversion, but this interpretation is not relevant here because I only consider first order approximations where certainty equivalence holds.}
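For concreteness, a familiar parametric example (purely illustrative; none of the results depend on it) is the isoelastic specification
$$u(C)=\frac{C^{1-\sigma}}{1-\sigma}, \qquad \nu(l)=\frac{l^{1+\eta}}{1+\eta}$$
for which $-\frac{Cu_{cc}}{u_{c}}=\sigma$ and $\frac{l\nu_{ll}}{\nu_{l}}=\eta$ are constant, and the labor supply condition (8) takes the convenient form $W_{t}=\varphi_{t}C_{t}^{\sigma}l_{t}^{\eta}$.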
\par I now specify that the measure of firms is a unit continuum. The firms are monopolistically competitive. This ensures that firms face a meaningful pricing decision, avoiding the case of unbounded sales possible under perfect or simple Bertrand competition. Aggregate consumption by the household can now be described as a constant elasticity aggregator over the consumption of each variety of good\footnote{See \cite{armington1969theory} for an early application; this functional form was popularized by \cite{dixit1977monopolistic} and is sometimes named after them.} $C_{t}=[\int_0^1 {c_{t}(i)}^{\frac{\theta-1}{\theta}}{d}i]^\frac{\theta}{\theta-1}$ where $\theta > 1$ is the elasticity of substitution among varieties. The optimal consumption allocation across varieties yields the demand system $c_{t}(i)=(\frac{p_{t}(i)}{P_{t}})^{-\theta} C_{t}$. Combining the two yields the expression for the aggregate price level in terms of all the prices in the economy $P_{t}=[\int_0^1 {p_{t}(i)}^{1-\theta}{d}i]^\frac{1}{1-\theta}$. Note that all production is consumed so $c_{t}(i)=y_{t}(i)$ and \begin{equation}C_{t}=Y_{t}\end{equation}
where the subscript $i$ refers to an individual firm and $Y_{t}$ refers to aggregate output.
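To sketch where the demand system comes from (a standard expenditure minimization argument, included for completeness): choose $c_{t}(i)$ to minimize $\int_{0}^{1}p_{t}(i)c_{t}(i){d}i$ subject to attaining the aggregate $C_{t}$. The first order condition for variety $i$, with multiplier $\lambda_{t}$ on the aggregator constraint, is
$$p_{t}(i)=\lambda_{t}\Big(\frac{C_{t}}{c_{t}(i)}\Big)^{\frac{1}{\theta}} \quad\Longrightarrow\quad c_{t}(i)=\Big(\frac{p_{t}(i)}{\lambda_{t}}\Big)^{-\theta}C_{t}$$
and substituting back into the aggregator yields $\lambda_{t}=[\int_{0}^{1}p_{t}(i)^{1-\theta}{d}i]^{\frac{1}{1-\theta}}=P_{t}$, so the multiplier is exactly the price index and total expenditure is $\int_{0}^{1}p_{t}(i)c_{t}(i){d}i=P_{t}C_{t}$.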
\par To introduce nominal rigidity I adopt the Calvo model \cite{calvo1983staggered}\cite{yun1996nominal}, as it is the most popular in the literature, although the major theoretical results about the behavior of price dispersion and its welfare implications generalize to alternative models of nominal rigidity, a point demonstrated in Section 3 and Appendix A. In the Calvo model, when a firm selects a price it stays constant for a stochastic number of periods; the probability that the firm is offered an opportunity to change its price is the same each period and across firms. It is equal to $1-\alpha$, hence $\alpha$, the fraction of prices that automatically stay fixed in a given period, is a measure of the degree of nominal rigidity. Firms are assumed to maximize profits. Their optimal pricing problems are symmetric and I focus exclusively on symmetric equilibrium. Therefore, all firms who reset their price in any given period will select the same price. The measure used to compute the aggregate price and consumption will be discrete, with mass points at every previous optimal reset price. Each firm's price will be the optimal price from when it was last able to reset. The current reset price will be denoted $p_{t}^*$.\footnote{There are several alternative approaches to building nominal rigidity into the pricing problem. The next most similar is the Taylor contracting model (see \cite{taylor1979staggered}) where firms change their price every $\frac{1}{1-\alpha}$ periods. The model has the same log-linear form. State-dependent pricing models, where firms have the opportunity for a costly price change every period, at least in their benchmark forms struggle to generate aggregate nominal rigidity \cite{caplin1987menu}\cite{golosov2007menu}\cite{head2012sticky}. Price and wage rigidity have been extensively documented across countries with different structural characteristics and policy environments \cite{dhyne2006price}\cite{dickens2007wages}\cite{babecky2010downward}. There has been considerable success in modeling how the heterogeneous pricing behavior of individual firms might give rise to aggregate monetary non-neutrality \cite{guimaraes2011sales}\cite{dixon2012generalised}\cite{anderson2013informational}\cite{kehoe2014prices}. I have abstracted from wage rigidity initially for simplicity because the labor market is not my primary focus.}
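A useful piece of arithmetic implied by this timing (standard for the Calvo model): a price set today remains in force for exactly $n$ periods with probability $\alpha^{n-1}(1-\alpha)$, so its expected duration is
$$\sum_{n=1}^{\infty}n\,\alpha^{n-1}(1-\alpha)=\frac{1}{1-\alpha}$$
so that, for example, $\alpha=0.75$ at quarterly frequency implies prices last four quarters on average.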
\subsection{Firms' Problem} Firms maximize profits in their choice of factors, and also in prices when they are given the chance to reoptimize. In order to maximize profits they must minimize costs.
\subsubsection{Cost Minimization} I make two simplifications here: first, that labor is the only factor of production; second, that there are constant returns to scale. Appendix E ???? relaxes these assumptions; in fact, allowing for decreasing returns to labor strengthens the main mechanism in this paper. The problem is therefore as follows:
\begin{equation}\min_{l_{t}(i)}\: W_{t}l_{t}(i) \end{equation}
subject to
\begin{equation}c_{t}(i)=A_{t}l_{t}(i)\end{equation}
Hence we can solve for real marginal cost
\begin{equation}\beth_{t} = \frac{W_{t}}{A_{t}}\end{equation}
$\beth_{t}$, the real marginal cost, is equal to the ratio of the real wage to the aggregate technical efficiency term $A_{t}$ familiar from the RBC model. More detailed interpretations will be offered in Section 4. Firm-level productivity shocks either do not exist or have been averaged out by a law of large numbers. Note that firms will have identical marginal costs here. This is because they face the same input prices and there are constant returns to scale. This simplifies my analysis, but both assumptions are relaxed in the Appendices to allow for capital formation and nominal wage rigidity.
\subsubsection{Profit Maximization}
The firm maximizes the expected present value of profits as follows:
\begin{equation}\max_{p_{t}(i)}E_{t}\sum_{T=t}^{\infty}\alpha^{T-t}Q_{t,T}[\frac{p_{t}(i)}{P_{T}}y_{T}(i)-\beth_{T} y_{T}(i)] \end{equation}
s.t. demand and market clearing constraints:
\begin{equation}y_{T}(i)=(\frac {p_{t}(i)}{P_{T}})^{-\theta}C_{T}, \quad T\geq t \end{equation}
Here $Q_{t,t+k}=\beta ^k \frac{u_{C}(C_{t+k})}{u_{C}(C_{t})}$ represents the real stochastic discount factor (SDF); it is the risk-adjusted present value of a unit of real income delivered $k$ periods ahead.
Optimal pricing gives the reset price $p_{t}^*$ as a markup $\frac{\theta}{\theta-1}$ over expected future marginal costs. The markup reflects the degree of imperfect competition and is declining in $\theta$, the elasticity of substitution among varieties. The limiting case $\theta \to \infty$ returns perfect competition. The problem is infinite horizon; the discounting reflects both $Q$, the real SDF, and $\alpha$, which governs the life-expectancy of the price. In real terms we have:
\begin{equation}
E_{t}\sum_{T=t}^{\infty} \Phi_{t,T}[\frac{p_{t}^{*}}{P_{T}}-\frac{\theta}{\theta-1}\beth_{T}]=0
\end{equation}
where the discount weight $\Phi_{t,T}=\alpha^{T-t}Q_{t,T}y_{T}$ reflects the survival probability of the price, the SDF, and the scale of output at time $T$ produced by a firm that last set its price in period $t$.
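Rearranging (15) makes the markup interpretation explicit:
$$\frac{p_{t}^{*}}{P_{t}}=\frac{\theta}{\theta-1}\;\frac{E_{t}\sum_{T=t}^{\infty}\Phi_{t,T}\,\beth_{T}}{E_{t}\sum_{T=t}^{\infty}\Phi_{t,T}\,\frac{P_{t}}{P_{T}}}$$
so the optimal relative reset price is the static markup applied to a weighted average of current and expected future real marginal costs, with weights that fall as the probability that the price is still in force declines.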
Under Calvo pricing the price level evolves as follows:
\begin{equation}P_{t}^{1-\theta}=\alpha P_{t-1}^{1-\theta}+(1-\alpha)(p_{t}^{*})^{1-\theta}\end{equation}
The persistence of the price level depends on $\alpha$ the degree of rigidity.
\subsection{Policy Rule}
Finally, it is necessary to have a policy rule to close the model. In practice, none of the rules used in empirical work can be viewed as optimizing a suitable loss function. The most well-known is the so-called Taylor rule\footnote{The alternative is to specify a money supply rule. The most popular money supply rule was the McCallum rule \cite{mccallum1988robustness}. The McCallum rule is essentially a flexible version of the monetary targeting rules advocated by monetarists like Milton Friedman \cite{friedman1960program} and applied by Western policymakers as part of disinflationary efforts during the late 1970s and early 1980s. In Britain and the United States monetary targeting was eventually abandoned because perceived instability in the money demand function impaired implementation. It now seems that this appearance was a figment of mis-measuring the opportunity cost associated with the new money substitutes on offer as a consequence of the financial deregulation occurring around that time; see \cite{ericsson1998demand}\cite{ireland2009welfare}\cite{barnett2012getting}\cite{ball2012short}\cite{lucas2015stability}. In the stylized models exposited here money demand is inconsequential to policy because of the presence of frictionless financial markets, as shown by \cite{woodford1998doing}. The empirical applicability of this result is doubtful, however, as empirical specifications including money appear to perform better than those with just interest rates \cite{belongia2014interest}. This seems to be particularly important in periods of very low interest rates, where the interest rate channel breaks down but policy still appears to be effective \cite{ueda2012japan}\cite{kapetanios2012assessing}\cite{d2012federal}\cite{swanson2014measuring}\cite{gilchrist2015monetary}. The problem appears to be the restriction on the class of financial market inefficiency imposed in benchmark models. It has been suggested that these distortions may have a substantial effect on monetary policy propagation in normal times \cite{jimenez2014hazardous}\cite{gertler2015monetary}\cite{nelson2015contractionary} and on optimal policy \cite{chadha2014note}\cite{ellison2014unconventional}\cite{de2014risk}. Nevertheless, results tend to be model specific and may depend on other aspects of the policy and regulatory environment \cite{svensson2014inflation}. Furthermore, macro-prudential concerns do not seem to have played a major role in monetary policy determination during the estimation period considered in Sections 2 and 6 \cite{fuhrer2008eyes}. Fostering financial stability does not appear to have been part of the monetary policy framework in any of the nations considered in Section 4 \cite{rotemberg2014federal}. There are several examples of leading policymakers arguing that Central Banks should not try to burst bubbles \cite{bernanke2001should}\cite{posen2006central}. These concerns support my decision to work with a benchmark efficient financial markets setup. The evidence about Quantitative Easing is the main reason why I exclude the post-crisis period.}\footnote{It is worth noting that even though they are associated with inflation targeting, Taylor rules seem to explain policy-making just as well when the Central Bank professes to be following a different policy regime such as money targeting; see \cite{bernanke1997does}\cite{clarida1998monetary}. This increases my confidence in applying a micro-founded inflation targeting model, later in the paper, to the United States, which has never been an explicit inflation targeter.} popularly formulated as follows:
\begin{equation}i_{t}= i^{*}_{t}+a_{\pi}\tilde{\pi}_{t} +a_{y}\tilde{y}_{t}\end{equation}
where $a_{\pi}> 1$ and $a_{y}\geq 0$. $\tilde{\pi}$ refers to the inflation gap, the difference between observed inflation $\pi_{t}$ and its target $\pi_{t}^{*}$; $\tilde{y}_{t}$ is the output gap, defined as the difference between actual output $y_{t}$ and its natural or potential rate $\bar{y}_{t}$. This potential rate will be proxied by either a deterministic time trend or a stochastic trend derived from conventional settings of the Hodrick-Prescott filter \cite{hodrick1997postwar}. In theory the potential rate is given by the flexible price equilibrium derived below. A new definition of the theoretical concept of the potential rate will be given in Section 5. $i^{*}_{t}$ is the natural nominal rate of interest. It comprises a natural real rate $r^{*}_{t}$ and the inflation target $\pi_{t}^{*}$.\footnote{For generality I allow for a time-varying inflation target. In this section and the associated Appendix A, as in Section 6, I exploit this feature when testing my model on United States data. I do so because the US has never had a legal inflation target and there are suggestions that monetary policy behavior has changed over time, e.g. with changes in government or Federal Reserve Chairperson. In Section 4 however, when testing optimal policy in countries with a legal inflation target, I take the inflation target as fixed throughout the period in the main specifications.} Essentially the rule states that nominal interest rates should rise more than one-for-one (so the real interest rate increases) with inflation to ensure a stable solution. There is also an allowance, but not a requirement, for interest rates to smooth the output gap\footnote{There are several differences from Taylor's original rule.}. There are no lags or leads in the relationship, so contemporaneous economic developments determine current policy. This is intuitive: there is no benefit to conditioning on past variables in a forward-looking model\footnote{The system is 'forward-looking' in the sense that agents have rational expectations and there are no state variables.}. In the particular but instructive case with no persistence in the exogenous shock processes, the future of a forward-looking system will be identical in expectation to the steady state. In a forward-looking system one instrument per period should be sufficient to implement optimal policy.
\par Note, we are not required to believe that interest rates are determined simultaneously with output and inflation. In subsequent sections interest rates will be determined before output and inflation are realized. As the rule is linear, under rational expectations the expectation errors will pass into the white noise error term, providing in fact the main rationale for the regression error itself.
I interpret the policy rule as a description of actual policy for a Central Bank with rational expectations\footnote{There is some controversy about whether Central Banks do display rational expectations; see \cite{romer2008fomc} and \cite{ellison2012defense}. However, the conclusion of excess persistence appears robust to, and even strengthened by, the use of internal forecasts understood to be the best forecast of the evolution of the economy; see \cite{coibion2012target}. In any case the significance and magnitude of the persistence appear too large to be explained solely by deviations from rational expectations.} and test its fit to the data. Hence, I view the presence of serial correlation and a significant lagged interest rate term as a rejection of the Taylor rule as a model of real-world policy-setting.
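For reference, the smoothing-augmented specification estimated in the persistence literature, which nests the rule (17) when $\rho=0$ (a reduced form used for testing, not part of the structural model here), is:
$$i_{t}=\rho i_{t-1}+(1-\rho)\left(i^{*}_{t}+a_{\pi}\tilde{\pi}_{t}+a_{y}\tilde{y}_{t}\right)+\epsilon_{t}$$
Estimates of $\rho$ near unity, with serially uncorrelated $\epsilon_{t}$, are what I treat as a rejection of the contemporaneous rule.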
\subsection{Price Dispersion} To understand the dynamics of the system we need to linearize it; for business cycle interpretation we need to choose a point around which to carry out this perturbation that can be interpreted as a long-run equilibrium. Economists have invariably used the \emph{non-stochastic steady state} that would prevail if the economy were never subject to shocks or expected to be so. The non-stochastic steady state is equivalent to the flexible price equilibrium. The point of this paper is that this approach is misguided, providing an erroneous interpretation of business cycle dynamics. \\The New Keynesian framework differs from Neoclassical models by preventing every firm from re-optimizing prices in every period. This allows for the possibility of \emph{price rigidity}, where today's price level contains reset prices from previous periods as well as the current optimal reset price. This means there can be \emph{price dispersion}, with implications for resource allocation and welfare.
\\We characterize price dispersion using the demand aggregator \begin{equation}\Delta= \int_{i}{(\frac{p_{i}}{P})}^{-\theta}\; \mathrm{d}\mu_{i}\end{equation}
It appears in the market-clearing condition
\begin{equation} \Delta_{t}C_{t}=A_{t}L_{t}\end{equation}
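It is worth recording why dispersion is costly (a one-line argument using only the definition of $P_{t}$): writing $z_{i}=(\frac{p_{i}}{P})^{1-\theta}$, the price index implies $\int_{i}z_{i}\,\mathrm{d}\mu_{i}=1$, and since $x\mapsto x^{\frac{\theta}{\theta-1}}$ is convex for $\theta>1$, Jensen's inequality gives
$$\Delta=\int_{i}z_{i}^{\frac{\theta}{\theta-1}}\,\mathrm{d}\mu_{i}\geq\Big(\int_{i}z_{i}\,\mathrm{d}\mu_{i}\Big)^{\frac{\theta}{\theta-1}}=1$$
with equality only when all prices coincide, so by (19) dispersion acts like a pure loss of productive capacity.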
Under Calvo pricing $\Delta$ evolves according to the following relationship:
\begin{equation}\Delta_{t}=(1-\alpha){(\frac{p_{t}^{*}}{P_{t}})}^{-\theta}+ \; \alpha {(1+ \pi_{t})}^{\theta} \Delta_{t-1}\end{equation}
Equation (16) implies $\left(\frac{p_{t}^{*}}{P_{t}}\right)^{1-\theta}=\frac{1-\alpha(1+\pi_{t})^{\theta-1}}{1-\alpha}$; using this to eliminate the reset price we find:
\begin{equation} \Delta_{t}=\frac{(1+\pi_{t})^{\theta}\left[(1+\pi_{t})^{1-\theta}-\alpha\right]^{\frac{\theta}{\theta-1}}}{(1-\alpha)^{\frac{1}{\theta-1}}} + \alpha(1+\pi_{t})^{\theta}\Delta_{t-1} \end{equation}
Price dispersion $\Delta$ is a strictly convex function of inflation $\pi$ and is a persistent process with the degree of persistence increasing in the degree of price rigidity $\alpha$. These two features underpin the rest of the analysis in this paper.
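To fix ideas, holding inflation constant at $\pi$ (with $\alpha(1+\pi)^{\theta}<1$), iterating (21) shows dispersion converges to a $\pi$-conditional steady state (a direct calculation from (21)):
$$\bar{\Delta}(\pi)=\frac{(1+\pi)^{\theta}\left[(1+\pi)^{1-\theta}-\alpha\right]^{\frac{\theta}{\theta-1}}}{(1-\alpha)^{\frac{1}{\theta-1}}\left[1-\alpha(1+\pi)^{\theta}\right]}$$
which equals one at $\pi=0$ and exceeds one for any other admissible inflation rate.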
To understand the dynamics it is necessary to log-linearize the system. The following property is remarkable.
\begin{lemma}Around the non-stochastic steady state $(\pi, \Delta)=(0,1)$, the log-linear approximation has $\hat{\Delta}_{t}=0$ for all values of inflation $\pi_{t}$. \end{lemma}
\begin{proof}Simply carry out log-linearization of the system $(\Pi, \Delta)$, where $\Pi_{t}= \frac{P_{t}}{P_{t-1}}$, and use the fact that $\hat{\Pi}_{t}=\pi_{t}$ to obtain:
\begin{equation}\hat{\Delta}_{t}= \varkappa(\pi, \Delta)\pi_{t}+ \alpha(1+\pi)^{\theta} \hat{\Delta}_{t-1}\end{equation}
where $(\pi, \Delta)$ are the values about which the approximation is taken. Differentiating (21), $\varkappa(\pi, \Delta)$ can be expressed as follows:
$$\varkappa(\pi, \Delta)=\alpha \theta (1+\pi)^{\theta}\left[1-\frac{1}{\Delta}\left(\frac{(1+\pi)^{1-\theta}-\alpha}{1-\alpha}\right)^{\frac{1}{\theta-1}}\right]$$
This expression is well defined provided $(1+\pi)^{1-\theta}-\alpha > 0$, which is the requirement for the existence of a positive reset price $p^{*}$ in the price level construction equation (16). Intuitively, in a New Keynesian model the prices that stay fixed put an upper bound, $\alpha^{-\frac{1}{\theta-1}}-1$, on the admissible rate of inflation in any one period\footnote{This result can be modified for models with the nominal indexation schemes considered in the literature, with the bound becoming a function of the indexing variable $\pi_{t-1}$ or trend inflation $\bar{\pi}$. This will not come into play in my set-up because the trend rate of inflation will be set to zero, for congruity with the benchmark model and to allow me to fully overturn the 'Divine Coincidence'. Alternatively $\pi_{t}$ can be viewed as the detrended rate of inflation, with each firm indexing to trend in every period. There is very little empirical support for nominal price indexation. However, I will leave relaxation of this assumption to future research.}. Crucially, at the non-stochastic steady state $(\pi, \Delta)=(0,1)$ the bracketed term vanishes, so $\varkappa(0,1)=0$ and the inflation term drops out of (22); by contrast, with price dispersion in the background, $\varkappa(0,\Delta)=\alpha\theta(1-\frac{1}{\Delta})>0$ for $\Delta>1$, an observation central to Section 5. By assumption of a non-stochastic steady state $\hat{\Delta}_{t-1}=0$, hence as the sum of two zero terms $\hat{\Delta}_{t}=0$, regardless of the value of inflation $\pi_{t}$.
\end{proof}
\subsubsection{Flexible Price Equilibrium}
If all firms can adjust prices in every period, corresponding to the limiting case $\alpha=0$, (15) reduces to:
\begin{equation}\frac{p^{*}_{t}}{P_{t}}=\frac{\theta}{\theta-1}\beth_{t}=\bar{\mu}\beth_{t}\end{equation}
Where $\bar{\mu}$ is the markup.
With flexible prices all firms charge the same price, so $p^{*}_{t}=P_{t}$; hence we can solve for the real marginal cost, which is the inverse of the markup. As the flexible price equilibrium is to be interpreted as a non-stochastic steady state of the New Keynesian model, we must set the disturbance terms to their equilibrium values, so that $\beth_{t}= \bar{\beth}=\frac{1}{\bar{\mu}}$. Likewise, with $\Delta=1$ the market-clearing relation (19) simplifies to $C_{t}=\bar{A}L_{t}$; combining this with the labor supply relation it is possible to solve for the steady state. A natural rate of interest $\bar{r}$, set by the Euler equation, brings about equilibrium over time. I am not going to do so here, as the non-stochastic steady state is a special case of the stochastic steady state which I shall characterize in Section 6. The only inefficiency in this steady state comes from the markup. This inefficiency can easily be corrected by an appropriate tax and subsidy scheme\footnote{The most plausible such scheme would be a sales tax rebated lump sum to households. The results in the next section imply that any tax scheme to mitigate the two inefficiencies from price dispersion and imperfect competition would have to vary over time with the level of price dispersion. This does not seem a good characterization of real world fiscal policy and would add unnecessary complexity to my model.} to leave a Pareto efficient allocation; I have left it out for comparability with my model, where I prove that Pareto efficiency is not implementable and use an alternative welfare criterion to characterize optimal policy.
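For later reference, one steady-state relation is worth recording (it follows immediately from the Euler equation (7) with constant consumption, prices and disturbances):
$$1+\bar{r}=\frac{1}{\beta}$$
so the natural real rate in this benchmark is pinned down by the discount factor alone; it is the $\bar{r}$ that appears in the system (33)-(35) below.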
\subsection{New Keynesian Phillips Curve and the Forward-Solution}
Analysis of the NK model begins with the recursive marginal cost Phillips curve. It describes how current inflation is determined by the present deviation of real marginal cost from its steady state and the expectation of next period's inflation. It is derived by combining the optimal price-setting condition (15) and the price level construction equation (16) and log-linearizing\footnote{Consult \cite{walsh2010monetary} for step-by-step derivations of this and other aspects of the basic New Keynesian model.}.
\begin{equation}\pi_{t}= \kappa \hat{\beth_{t}} + \beta E_{t} \pi_{t+1} \end{equation}
where $\kappa= \frac{(1-\alpha)(1-\beta \alpha)}{\alpha}$. The corresponding infinite horizon forward solution for inflation in terms of future marginal costs is \begin{equation}\pi_{t}=\kappa \sum_{i=0}^{\infty}\beta^{i}E_{t}\hat{\beth}_{t+i}\end{equation}
It is convenient to eliminate the unobserved variable $\hat{\beth_{t}}$, representing real marginal cost, in favor of an expression in the output gap, which can in principle be estimated from macroeconomic data. First linearize the marginal cost function (12), where $\hat{w}_{t}$ denotes the log deviation of the real wage:
\begin{equation}\hat{\beth_{t}}=\hat{w}_{t}-\hat{a}_{t}\end{equation}
Next linearize the aggregate production relation (19):
\begin{equation}\hat{a}_{t}=\hat{y}_{t}-\hat{l}_{t}+ \hat{\Delta}_{t}\end{equation}
With the assumption of a non-stochastic steady state we know from Lemma 1 that $\hat{\Delta}_{t}=0$; now using the optimal labor supply condition (8) we find
\begin{equation}\hat{\beth_{t}}= (\sigma + \eta)[\hat{y}_{t}-\frac{1+\eta}{\sigma + \eta}\hat{a}_{t}]\end{equation}
The standard New Keynesian model then works with the \emph{efficient output gap} $y^{e}$, defined as the log-difference between actual output $\hat{y}_{t}$ and flexible price equilibrium output $\hat{y}_{t}^{f}$:
\begin{equation}y_{t}^{e}=\hat{y}_{t}- \hat{y}_{t}^{f}\end{equation}
Note that flexible price equilibrium output is $\hat{y}_{t}^{f}=\frac{1+\eta}{\sigma + \eta}\hat{a}_{t}$, so the productivity term $\hat{a}_{t}$ cancels out of the marginal cost expression, which is proportional to the output gap:
\begin{equation}\hat{\beth_{t}}=(\sigma+\eta)y_{t}^{e} \end{equation}
This yields the conventional recursive Phillips curve, where inflation is a function of the efficient output gap and next period's expected inflation:
\begin{equation}\pi_{t}=\omega y_{t}^{e} + \beta E_{t}\pi_{t+1}\end{equation}
where $\omega=(\sigma+\eta)\frac{(1-\alpha)(1-\beta \alpha)}{\alpha}$. Its forward solution is \begin{equation} \pi_{t}=\omega \sum_{i=0}^{\infty}\beta^{i}E_{t}y_{t+i}^{e}\end{equation}
Accepting that we cannot have a perfect fit, to test the model we need error terms in each equation; the final system is therefore the following Euler, Taylor and Phillips curve triplet. In this section I neglect the possibility of shocks to the natural rate, setting $\bar{r}_{t}=\bar{r}$; this is picked up in Section 5, where it is shown to be inconsequential for the behavior of inflation or the efficient output gap.
\begin{equation}y^{e}_{t}= E_{t}y^{e}_{t+1}-\frac{1}{\sigma}(i_{t}-\bar{r}-E_{t}\pi_{t+1})+ u^{1}_{t}\end{equation}
\begin{equation}i_{t}=\bar{r}+ a_{\pi}\pi_{t} + a_{y}y_{t}^{e}+u^{2}_{t}\end{equation}
\begin{equation}\pi_{t}=\omega y_{t}^{e} + \beta E_{t}\pi_{t+1} + u^{3}_{t}\end{equation}
To make the calculation of the forward solution easier I follow the convention of substituting the policy rule into the Euler equation to give the matrix system:
\begin{equation}E_{t}X_{t+1}=\begin{bmatrix}E_{t}\pi_{t+1} \\ E_{t}y_{t+1}^{e}\end{bmatrix}=AX_{t}+Bu_{t}\end{equation}
where $X_{t}=\begin{bmatrix}\pi_{t} \\ y_{t}^{e}\end{bmatrix}$, $A=\begin{bmatrix} \beta^{-1}& -\omega \beta^{-1}\\ \sigma^{-1}(a_{\pi}-\beta^{-1}) & 1+ \sigma^{-1}(a_{y}+ \omega \beta^{-1})\end{bmatrix}$
\\ $B= \begin{bmatrix} 0 & 0 & -\beta^{-1} \\ -1 & \sigma^{-1} & \sigma^{-1}\beta^{-1} \end{bmatrix}$ and $(u_{t})'=\begin{bmatrix} u^{1}_{t} & u^{2}_{t} & u^{3}_{t}\end{bmatrix}$. \\ From the general solution \begin{equation}X_{t}=-\sum_{i=0}^{\infty}A^{-(1+i)}BE_{t}u_{t+i}\end{equation} the three variables can be expressed as sums of expected future shock terms. \begin{equation}\pi_{t}=\sum_{k=0}^{\infty} \zeta_{\pi,k}^{1}E_{t}u_{t+k}^{1}+\zeta_{\pi,k}^{2}E_{t}u_{t+k}^{2}+\zeta_{\pi,k}^{3}E_{t}u_{t+k}^{3} \end{equation}
\begin{equation}y_{t}^{e}=\sum_{k=0}^{\infty} \zeta_{y,k}^{1}E_{t}u_{t+k}^{1}+\zeta_{y,k}^{2}E_{t}u_{t+k}^{2}+\zeta_{y,k}^{3}E_{t}u_{t+k}^{3} \end{equation}
\begin{equation}i_{t}=\bar{r} +
\sum_{k=0}^{\infty} \zeta_{i,k}^{1}E_{t}u_{t+k}^{1}+\zeta_{i,k}^{2}E_{t}u_{t+k}^{2}+\zeta_{i,k}^{3}E_{t}u_{t+k}^{3} \end{equation}
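In particular, once the error terms are unforecastable (Proposition 1 below), every term with $k>0$ drops out and the solution collapses to the contemporaneous map
$$X_{t}=-A^{-1}Bu_{t}$$
so the $k=0$ coefficients $\zeta^{j}_{\pi,0}$ and $\zeta^{j}_{y,0}$ are simply the entries of $-A^{-1}B$, with the interest rate coefficients $\zeta^{j}_{i,0}$ recovered from the policy rule (34). This is the representation used in the persistence and identification arguments below.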
The details of the $\zeta$ coefficients are not important here and are reported in Appendix A. The Blanchard-Kahn condition \cite{blanchard1980solution}, that both eigenvalues of the matrix $A$ lie outside the unit circle, is required for the series to converge to a unique solution. \subsection{Persistence Problem} The persistence problem of the New Keynesian model lies in the properties of the errors and expectations. To link the two, imperfect information is assumed.
\begin{assumption}The central bank has imperfect information about the present state of the economy, which is resolved at the end of the period, after it has chosen its behavior.\end{assumption}
This restriction seems realistic: for example, quarterly output figures are released soon after the end of the relevant quarter, so the central bank only observes the present state of the economy with error. This allows us to interpret the error term in the Taylor rule, $u_{t}^{2}$, as a central bank expectation error.\footnote{In reality private sector agents should be equally if not more uncertain about the state of the economy. However, we do not need to assume imperfect information to interpret the error terms $u_{t}^{1}$ and $u_{t}^{3}$ as expectation errors, since these equations already contain the unknown variable $E_{t}\pi_{t+1}$. Indeed, introducing imperfect information on the private sector side would require the unnecessary complication of a signal extraction problem.}
\begin{proposition}Each error term $u_{t}^{i}$ must be white noise, i.e. $E[u_{t}^{i}\vert \mathcal{I}_{t-l}]=0, \; \forall t >0, \; 1\leq l\leq t$.\end{proposition} Here $\mathcal{I}_{T}$ is the information set provided by the model at time $T$; otherwise the model is observationally equivalent to a model with bounded rationality (irrational expectations) and is therefore not identified within the encompassing class of DSGE models.
This result follows from the fact that when we estimate the model with macroeconomic data we cannot observe expectations. Therefore we must use the mathematical expectation given by the structural model, denoted by a superscript $M$. Take the example of the Phillips curve for concreteness, so that $E_{t}\pi_{t+1}=E_{t}^{M}\pi_{t+1}$; Assumption 1 makes the argument equally applicable to the other two equations in the system. Using $S$ to denote subjective expectations, subsuming the Phillips curve error term $u^{3}_{t}$ into the subjective expectation yields $E_{t}^{S}\pi_{t+1}=E_{t}^{M}\pi_{t+1}+ u_{t}^{3}$. Now if $u_{t}^{3}$ is not white noise then the agent is making systematically incorrect predictions and we have observational equivalence with a bounded rationality (irrational expectations) model. Therefore the rational expectations model is not identified\footnote{An alternative strategy would be to relax rational expectations and use survey data to measure inflation expectations \cite{roberts1995new}\cite{roberts1997inflation}\cite{coibion2010testing}, although I do not pursue this here.}.
\\For this reason, apart from the technology shock, which as shown above does not feature in the final solution of the benchmark New Keynesian model, I keep all other shocks white noise when I estimate my own model in Section 7, for congruity with the rational expectations framework. The technology shock is retained, however, in Sections 5 and 6, where optimal policy results are derived and tested. In Section 5 more sophisticated economic interpretations of the shocks, in particular $u^{3}_{t}$, will be discussed.
\begin{assumption}$u^{i}_{t}$ is an iid random variable with $E \lvert u^{i}_{t} \rvert < \infty$. \end{assumption}
This restriction will be used both to derive the conditions for existence of a solution to (33)-(35) and to construct arguments about the consistency properties of various estimators. At the cost of additional complexity, weaker conditions could be used to accommodate dependence in higher moments. In the empirical section, processes with higher order dependence, such as GARCH, will be considered; all of these admit solutions for the variables $(y_{t}^{e}, i_{t},\pi_{t})$.
\\Note that I have not ruled out time trends or unit roots in actual output $y_{t}$: provided that any non-stationary process impacts actual output $y_{t}$ and flexible price output $y^{f}_{t}$ equally, the efficient output gap $y^{e}_{t}$ can be stationary. In Section 3/(Appendix?) a battery of tests rejects a unit root in inflation in both panel and time series applications. Section 6/(Appendix?) reports similar findings for interest rates.\\ In the empirical application (Section 6 or 7) the output gap variable will automatically be stationary because of the detrending procedure. It is worth noting that time series of output distributions in OECD countries are well described by the exponential power family of distributions, for which all power moments are defined; see \cite{christiano2007fit} \cite{fagiolo2007output}\cite{fagiolo2008output}\cite{fagiolo2009detrending}\cite{franke2015fat}. It is natural to expect that Keynesian stabilization policy would make the tails of the output gap distribution thinner than those of raw output.\footnote{In fact \cite{fagiolo2009detrending} shows that the exponential power family is able to encompass output fluctuations across different filtering specifications, including unfiltered. Several cited studies bolster their case with similar findings for other macroeconomic time series, such as real wage and employment series, that would inherit their time series properties from the three core variables in the model. Consult \cite{agro1995maximum} \cite{bottazzi2003common} \cite{bottazzi2004subbotools} for modern expositions of this distribution class. It is sometimes called the Subbotin family of densities, after its creator, or the Generalized Normal Distribution Version 1.} Indeed, I am able to confirm this intuition in Section 5/6 when I derive my own Keynesian model and characterize optimal policy. When I simulate and test my own model in Section 6/7 I use a normal distribution for comparability with existing research. Nevertheless, the persistence results in Section 3 are robust to error distributions in this class.
Imposing these two assumptions allows me to characterize the existence and uniqueness of solutions to this linearized system.
\begin{proposition}There exists a solution to the linear equation system (33)-(35) in which all three major macroeconomic variables $(y_{t}^{e}, i_{t},\pi_{t})$ are serially uncorrelated processes. If both eigenvalues of the matrix $A$ lie outside the unit circle this solution is unique.\end{proposition}
\begin{proof}For illustration consider $\pi_{t}$; analogous arguments can be made for the other two variables. As the system is covariance stationary the autocovariance function is symmetric, so $Cov(\pi_{t}, \pi_{t-l})=Cov(\pi_{t},\pi_{t+l})$ $\forall l$, and it suffices to show that $Cov(\pi_{t}, \pi_{t-l})=0$ for arbitrary lag length $l>0$. From the white noise error assumption we know that $E[u_{t+k}^{i}\vert \mathcal{I}_{t}]=0 \; \forall k >0$; this means that current inflation can be expressed as a function of only contemporaneous shocks, $\pi_{t}= \zeta_{\pi,0}^{1}u_{t}^{1}+\zeta_{\pi,0}^{2}u_{t}^{2}+\zeta_{\pi,0}^{3}u_{t}^{3}$. Applying the same argument to period $t-l$ implies $\pi_{t-l}$ is a function of time $t-l$ errors only. Finally, the white noise assumption means that $\forall i,j$ and $\forall l>0$, $E(u_{t-l}^{i}u_{t}^{j})=0$ so $Cov(u_{t-l}^{i},u_{t}^{j})=0$. Now the result follows from noting that $Cov(\pi_{t},\pi_{t-l})=\sum_{i=1}^{3}\sum_{j=1}^{3}\zeta_{\pi,0}^{i}\zeta_{\pi,0}^{j}Cov(u_{t}^{i},u_{t-l}^{j})$, where every term in the summation is zero. \end{proof}
The no persistence result arises because the model lacks either intrinsic or extrinsic persistence. It lacks intrinsic persistence because the current value of the state variables $X_{t}$ can be written as a function of just the current and future values of the shock process $u_{t}$ independent of their past realizations.\footnote{In the frequency domain this corresponds to the model acting as a neutral filter preserving the correlation spectrum in the error terms- a point first made by \cite{cogley1995output}.} It lacks extrinsic persistence because the errors are observationally equivalent to expectation errors which means they cannot be persistent.
The no-persistence result will carry over to forward-looking policy rules that contain future expected inflation and output gap terms, because these can be collapsed into the contemporaneous form of (34), as the expectations of future variables will all be zero; verifying the existence condition on the eigenvalues will, however, likely require a numerical computation routine. Appendix A generalizes these results to the richer Generalized Taylor price setting framework and to models with capital.
\\ The absence of persistence in the New Keynesian model contrasts with RBC models, where the relevant solution variable is simply output $y_{t}$, which can inherit persistence from technology or news about future productivity developments \cite{kydland1982time} \cite{beaudry2006stock} \cite{beaudry2007can} \cite{jaimovich2009can} \cite{walker2011information} \cite{schmitt2012s}. Of course output itself inherits this persistence in the New Keynesian solution too: it moves one-for-one with efficient output in the corresponding RBC model, so that the output gap stays constant. This highlights the point that Neo-Classical variables and associated shocks do not necessarily appear in the New Keynesian solution for business cycle dynamics. I discuss this point further in the optimal policy Section 5. Therefore, novel New Keynesian features are needed to improve the fit of the New Keynesian model. It is for this reason that this paper introduces a new feature, first-order price dispersion, unique to environments with price rigidity, into the basic New Keynesian model. \\ The econometric implications are unfortunate. The model cannot be used for forecasting. The key macroeconomic policy variable, inflation, is white noise. Neither can the model contribute to forecasting output or interest rates, over and above purely statistical procedures or classical models of the natural rate. To confirm: the New Keynesian model is not yet useful for policy. This is particularly unfortunate as the ability to forecast short-term fluctuations is the yardstick against which New Keynesian macroeconomists ask to be judged, as the following quote makes clear:\\ "We focus on forecastable movements in our variables because it is arguable that these constitute the essence of what it means for a variable to be 'cyclical'" \cite{rotemberg1996real} (p71).\\In the article's conclusion (p87) the authors make their point more strongly, rejecting a suite of Real Business Cycle models on the following grounds: "We have demonstrated that the forecastable movements in output, consumption and hours [the three main variables in the Real Business Cycle framework]- what we would argue is the essence of the 'business cycle'- are inconsistent with a standard growth model disturbed solely by random shocks to the rate of technical progress."\footnote{See also the abstract of \cite{blinder1981inventories} for a similar definition.}
New Keynesian economics is not living by its professed econometric standards. For those with sufficient perspective this is all rather reminiscent of the evolution of Classical economists' attitudes towards econometrics, encapsulated in the following quote by Nobel laureate Thomas Sargent about fellow laureates Edward C. (Ed) Prescott and Robert (Bob) Lucas:
"My recollection is that Bob Lucas and Ed Prescott were initially very enthusiastic about rational expectations econometrics. After all, it simply involved imposing on ourselves the same high standards we had criticized the Keynesians for failing to live up to. But after five years of doing likelihood ratio tests on rational expectation models. I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models." \cite{sargent2005interview}
Worse still, the basic model cannot even be estimated with data on the three variables in $Q_{t}$ (defined below), because it is not \emph{identified} in the sense of \cite{lehmann1998theory} or \cite{dufour2008identification}.
\begin{remark}The structural parameters of the model defined by (33)-(35) are not identified.\end{remark}
\begin{proof}Applying Proposition 2 simplifies the system to:
$$y^{e}_{t}= -\frac{1}{\sigma}(i_{t}-\bar{r})+ u^{1}_{t}$$
$$i_{t}=\bar{r}+ a_{\pi}\pi_{t} + a_{y}y_{t}^{e}+u^{2}_{t}$$
$$\pi_{t}=\omega y_{t}^{e} + u^{3}_{t}$$
Define the vector of endogenous variables $Q_{t}=(y^{e}_{t},i_{t},\pi_{t})$ and the parameter vector $\theta=(\gamma,a_{\pi},a_{y},\omega,\beta,\boldsymbol{\lambda})$, where $\gamma=\frac{1}{\sigma}$ and $\boldsymbol{\lambda}$ is the collection of parameters governing the joint distribution of the three error terms (with slight abuse of notation, $\theta$ here denotes the parameter vector rather than the elasticity of substitution). $\Theta$ denotes the sample space of the parameters, formed of the product space $(\Gamma \times {A}_{\pi}\times {A}_{y}\times \Omega \times B \times \Lambda)$ where, for example, ${A}_{\pi}$ is the set of admissible values for the parameter $a_{\pi}$. In the common case of normally distributed errors and unconstrained optimization $\Theta ={\Re}^{11}$, with $\Lambda$ consisting of the distinct terms of the variance-covariance matrix of the error terms. \\The crucial object is $f_{\theta_{0}}(Q_{t})$, the joint probability distribution induced by the particular parameter vector $\theta_{0} \in \Theta$ at time $t$. Recall that a parameter $\theta$ is identified when there is a one-to-one mapping to the probability distribution $\theta \rightarrow f_{\theta}(Q_{t})$ at every time $t$.
\\Suppose the model were identified and proceed by counterexample. Since $\beta$ does not appear in the reduced form, I can take any $(\gamma,a_{\pi},a_{y},\omega,\lambda) \in (\Gamma \times {A}_{\pi}\times {A}_{y}\times \Omega \times \Lambda )$ and $\beta_{1}, \beta_{2} \in B$ with $\beta_{1} \neq \beta_{2}$. Let $\theta_{1}=(\gamma,a_{\pi},a_{y},\omega,\beta_{1},\lambda)$ and $\theta_{2}=(\gamma,a_{\pi},a_{y},\omega,\beta_{2},\lambda)$. Then $f_{\theta_{1}}=f_{\theta_{2}}$ $\forall t$ but $\theta_{1} \neq \theta_{2}$, contradicting the hypothesis of a one-to-one mapping.
\end{proof}
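To make the counterexample concrete, the reduced form can be simulated directly: since $\beta$ never enters the system (33)-(35) as simplified above, the simulated distribution of $Q_{t}$ is literally invariant to it. The following is a minimal sketch (in Python, assuming normally distributed errors and illustrative parameter values, not a calibration):
\begin{verbatim}
import numpy as np

# Simulate the static reduced form of (33)-(35). beta is accepted as an
# argument but never used below: it drops out, so it cannot be identified.
def simulate(gamma, a_pi, a_y, omega, rbar, beta, T, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(T, 3))            # errors (u1, u2, u3)
    A = np.array([[1.0,   -gamma, 0.0],    # y - gamma*i          = -gamma*rbar + u1
                  [-a_y,   1.0,  -a_pi],   # -a_y*y + i - a_pi*pi = rbar + u2
                  [-omega, 0.0,   1.0]])   # -omega*y + pi        = u3
    b = np.array([-gamma * rbar, rbar, 0.0])
    return np.linalg.solve(A, (b + U).T).T # rows are Q_t = (y_t, i_t, pi_t)

Q1 = simulate(1.0, 1.5, 0.5, 0.3, 0.02, beta=0.99, T=1000)
Q2 = simulate(1.0, 1.5, 0.5, 0.3, 0.02, beta=0.50, T=1000)
assert np.allclose(Q1, Q2)  # identical data: beta is unidentified
\end{verbatim}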
For concreteness consider the popular Generalized Method of Moments (GMM) estimator\footnote{The approach was developed by \cite{hansen1982large}, applied to rational expectations modeling by \cite{hansen1982generalized}, and based upon the method of moments estimation procedure first employed by Karl Pearson.} for the structural parameter matrix $R_{3 \times 3}$ of the relationship between $Q_{t}$ and $E_{t}Q_{t+1}$. It is necessary to have an $m \times 1$ ($m \geq 3$) vector of available instruments $Z_{t} \in \mathcal{I}_{t}$ consistent with the orthogonality condition and associated estimator:
$$E_{t}[Z'_{t}(Q_{t}-Q_{t+1}R)]=0$$
To allow for the case of over-identification, where the number of potential instruments in $Z_{t}$ exceeds the number required ($m > 3$ in the basic model), I minimize the quadratic form of the orthogonality conditions $H_{T}(R)=[T^{-1}Z'_{t}(Q_{t}-Q_{t+1}R)]\,W_{T}\,[T^{-1}Z'_{t}(Q_{t}-Q_{t+1}R)]'$. Here $W_{T}$ is a weighting matrix, dependent on $\theta$, that will turn out to be inversely proportional to the variance-covariance matrix of the orthogonality conditions; as previously, $T$ is the number of time observations. Optimization, with continuous differentiability in $R$, yields the GMM estimator\footnote{Consult \cite{hansen1982generalized} or a textbook such as \cite{hamilton1994time} for a full exposition.}
$$\hat{R}=(Q'_{t+1}Z_{t}W_{T}Z'_{t}Q_{t+1})^{-1}Q'_{t+1}Z_{t}W_{T}Z'_{t}Q_{t}$$
with $\hat{R}=(Z'_{t}Q_{t+1})^{-1}Z'_{t}Q_{t}$
corresponding to the just-identified case where the orthogonality conditions are solved exactly.\\However, applying Proposition 2 again shows that this estimator is not defined, because $E[Z'_{t}Q_{t+1}]=0$, so the first matrix is singular in the limit. This follows because $Q_{t+1}$ is comprised entirely of expectation errors, which must be uncorrelated with any variable belonging to the information set at time $t$, $\mathcal{I}_{t}$, to which $Z_{t}$ belongs. Hence there are no valid instruments for the expectations of future macroeconomic variables in this system.
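The degeneracy of the moment condition is equally easy to see numerically. In the sketch below (Python, illustrative values) $Q_{t+1}$ is treated as pure expectation error, independent of anything in $\mathcal{I}_{t}$; the sample moment $Z'_{t}Q_{t+1}$ then collapses towards zero as $T$ grows, so the inverse required by the just-identified estimator is ill-conditioned in the limit:
\begin{verbatim}
import numpy as np

# When Q_{t+1} consists purely of expectation errors, the cross-moment
# Z'Q_{t+1} tends to zero and (Z'Q_{t+1})^{-1} blows up.
rng = np.random.default_rng(1)
for T in (10**3, 10**5, 10**6):
    Z = rng.normal(size=(T, 3))        # instruments, in the time-t info set
    Q_next = rng.normal(size=(T, 3))   # expectation errors: independent of Z
    M = Z.T @ Q_next / T               # sample analogue of E[Z'_t Q_{t+1}]
    print(T, np.abs(M).max())          # shrinks towards 0 as T grows
\end{verbatim}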
This implies the structural parameters $\theta$ are not identified. For proof, suppose the converse: if $\theta$ were identified (i.e. there were sufficient valid instruments), Assumption 2 would bound the expected deviation from the orthogonality conditions, which would be sufficient to invoke \cite{hansen1982generalized} to prove weak convergence (in probability) of the GMM estimator $\hat{\theta}_{T}\rightarrow \theta$. Now consider the solution for the reduced form parameters in terms of their structural counterparts.
$$R_{11}=\frac{1}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$-\infty < R_{12}=\frac{\gamma(a_{\pi}\beta-1)}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
$$R_{13}=0$$
$$R_{21}=\frac{a_{y}+a_{\pi}\omega}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$R_{22}=0$$
$$-\infty< R_{23}=\frac{a_{y}\gamma(a_{\pi}\beta-1)+a_{\pi}[\omega \gamma(a_{\pi}\beta - 1)+\beta(1+\gamma(a_{y}+a_{\pi}\omega))]}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
$$R_{31}=\frac{\omega}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$R_{32}=0$$
$$-\infty< R_{33}=\frac{\omega \gamma(a_{\pi}\beta - 1)+\beta[1+\gamma(a_{y}+a_{\pi}\omega)]}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
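As a numerical cross-check on the sign and boundedness restrictions displayed above, the mapping from structural to reduced-form parameters can be coded directly. A minimal sketch (Python, illustrative parameter values):
\begin{verbatim}
import numpy as np

# The reduced-form matrix R as a function of the structural parameters,
# transcribing the expressions above (zeros for R_13, R_22, R_32).
def reduced_form(gamma, a_pi, a_y, omega, beta):
    d = 1.0 + gamma * (a_y + a_pi)     # common denominator
    R = np.zeros((3, 3))
    R[0, 0] = 1.0 / d
    R[0, 1] = gamma * (a_pi * beta - 1.0) / d
    R[1, 0] = (a_y + a_pi * omega) / d
    R[1, 2] = (a_y * gamma * (a_pi * beta - 1.0)
               + a_pi * (omega * gamma * (a_pi * beta - 1.0)
                         + beta * (1.0 + gamma * (a_y + a_pi * omega)))) / d
    R[2, 0] = omega / d
    R[2, 2] = (omega * gamma * (a_pi * beta - 1.0)
               + beta * (1.0 + gamma * (a_y + a_pi * omega))) / d
    return R

R = reduced_form(1.0, 1.5, 0.5, 0.3, 0.99)
assert R[0, 0] > 0 and R[1, 0] > 0 and R[2, 0] > 0  # sign restrictions
assert np.all(np.isfinite(R))                       # boundedness
\end{verbatim}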
Note that each reduced form parameter $R_{ij}$ is a composite of continuous functions and is therefore a continuous function of the structural parameters $\theta$, see \cite{aliprantisborder}. Denote this function by $\digamma_{ij}$, so $\digamma_{ij}(\theta)=R_{ij}$. Therefore, by the continuous mapping theorem of \cite{mann1943stochastic}, $\lim_{T \rightarrow \infty}\hat{\digamma}_{ij}=R_{ij}$ \emph{in probability}. This would create a one-to-one mapping between reduced-form parameters and probability distributions over the observables $Q_{t}$, via the probability limits of the reduced form, the probability limits of the structural parameters, and the structural parameters themselves. Hence the reduced form parameters would be identified- a contradiction. Therefore the structural parameters must be unidentified.\\ The notion that the New Keynesian Phillips curve might be weakly identified is not new \cite{woodford1994nonstandard}\cite{mavroeidis2004weak}, although I am unaware of the possibility of \emph{no identification} in the New Keynesian model ever having been set out quite so clearly. In the next section I present a novel test of this unidentified New Keynesian Phillips curve. I document the strongest evidence yet of inflation persistence, which I use as a stylized fact in subsequent model building. \\
\section{Price Dispersion, Approximations and Welfare} The crucial difference between a stochastic and a non-stochastic steady state lies in the behavior of price dispersion. I consider a general setting which can feature both continuous and discrete shock processes. I find that price dispersion, which is absent in a non-stochastic model, will always be present in a plausible stochastic environment.
\\ In Subsection 3.1 I define a concept of \emph{inefficient price dispersion} that can arise in New Keynesian models. I then prove that it will arise in all but a few highly stylized New Keynesian models, and characterize its dynamic behavior in general and in the two most popular variants of the New Keynesian model, those with Calvo and Taylor contracts. Although not conceptually difficult to prove, and extremely general in scope, these results overturn existing thinking. Consider, for example, the following quote, in which the authors appear to believe that New Keynesian models cannot generate price dispersion unless there is a non-zero rate of trend inflation- a claim which I prove to be false. \newline "... many New Keynesian models such as those in \cite{clarida1998monetary} or \cite{woodford2011interest}, generate price dispersion if and only if there is inflation ... But the data suggests there is price dispersion during periods of zero or low inflation (something first noted in \cite{campbell2014rigid}.) This suggests it is important to work with (non-New Keynesian) models that can deliver price dispersion without inflation." \cite{head2012sticky} (p. 942)
\\ In Subsection 3.2 I apply these results to several popular topics in New Keynesian modeling, including zero lower bound, stochastic volatility and regime switching models. In each case I am able to show that major papers are not just mathematically in error, but that their results provide a qualitatively misleading account of macroeconomic dynamics under the relevant mechanism. I discuss the (often adverse) implications for these models' ability to match empirical evidence.
\\Subsection 3.3 characterizes price dispersion as a random variable in relation to the underlying stochastic processes in the economy. In application these results invalidate the basic New Keynesian Phillips curve and its forward solution, equations (24) and (32). Subsection 3.4 shows that inefficient price dispersion constitutes a new form of market failure. I show how it originates on the \emph{producer side} of the economy and link it to the failure of Acemoglu's representative firm theorem \cite{acemoglu2008introduction}. I demonstrate that it cannot be corrected by tax policy, even if unrealistic lump sum tax and subsidy combinations were allowed.\\ In Subsection 3.5, I link topological conjugacy to the Lucas critique and show that representing a New Keynesian model by a log-linear approximation about the non-stochastic steady state induces a failure to meet the Lucas critique. This is a very serious charge against existing New Keynesian models, for which passing the Lucas critique is a \emph{raison d'\^{e}tre}. Finally, Subsection 3.6 derives a correct notion of \emph{stochastic equilibrium} which exists in all New Keynesian models. The equilibrium concept could be applied elsewhere in economics, but I do not pursue this point here. I use the mathematical concept of \emph{topological conjugacy} to prove why it is legitimate to represent the dynamics of the economy close to equilibrium by taking a log-linear approximation about the center of this distribution, from which I derive a correct Phillips Curve.
\subsection{Non-Stochastic Environments}
The following powerful result, derived directly from the construction of the price level, tells us that the measure of price dispersion $\Delta$, defined by equation (18) in Section 2, is strictly greater than unity unless all firms set the same price.
\begin{lemma}$\Delta\geq 1$ with $\Delta=1$ if and only if $p_{t}(i)=P_{t},\;\forall \;i$.\end{lemma}
Here $p_{t}(i)$ is the price set by any firm $i$ at time $t$.
The lengthy proof is contained in Appendix A: the first part is a familiar application of Jensen's inequality, possible because the demand system is sufficiently convex, and the second a small extension exploiting strict convexity. The extension to other New Keynesian models that use the constant elasticity of substitution preference scheme is simple. I do so in Appendix A by modifying the probability measure used to aggregate the various prices into the price level, to correspond to three common pricing models: the basic Calvo model used here, the Calvo model with indexation to trend inflation used by \cite{yun1996nominal}, and the General Taylor Economy of \cite{taylor1993macroeconomic} \cite{coenen2007identifying} \cite{dixon2010can} \cite{dixon2011contract}\cite{dixon2012generalised}, which encompasses a wide range of pricing models and can be fitted exactly to match cross-section price distributions.
\footnote{\cite{dixon2012unified} shows that the Generalized Taylor model can approximate arbitrarily well the Generalized Calvo used by authors such as \cite{wolman1999sticky}\cite{dotsey2006pricing} \cite{sheedy2010intrinsic}, the multiple Calvo associated with \cite{carvalho2006heterogeneity} \cite{de2011aggregation}, as well as the familiar simple Taylor and Calvo.}
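The inequality in the lemma above is also easy to verify numerically in the CES case. A minimal sketch (Python), assuming the CES aggregator used here and an illustrative elasticity $\theta=6$:
\begin{verbatim}
import numpy as np

# With CES demand, Delta = E[(p/P)^(-theta)] where
# P = (E[p^(1-theta)])^(1/(1-theta)). Jensen's inequality gives
# Delta >= 1, with equality only when all prices coincide.
theta = 6.0
rng = np.random.default_rng(2)
for prices in (rng.lognormal(0.0, 0.1, 10_000),  # dispersed prices
               np.full(10_000, 1.3)):            # identical prices
    P = np.mean(prices ** (1.0 - theta)) ** (1.0 / (1.0 - theta))
    Delta = np.mean((prices / P) ** (-theta))
    print(round(Delta, 6))                       # > 1, then exactly 1
\end{verbatim}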
The result is not specific to one demand system. To allow heterogeneity between firms it is necessary to define a concept of \emph{efficient} price dispersion. Efficient price dispersion measures the price dispersion that would exist in the corresponding flexible price model- where all prices could be reset in every period at no cost with perfect information- as in the RBC framework. It takes the form\begin{equation}\Delta_{t}^{*}=\int_{i}F_{i,t}(\frac{p^{**}_{i,t}}{P_{t}})\mathrm{d}{\mu_{i,t}}\end{equation} where $P_{t}=\int_{i}p_{i,t}F_{i,t}(\frac{p^{**}_{i,t}}{P_{t}})\mathrm{d}{\mu_{i,t}}$ is the price level. $F_{i,.}$ represents the demand curve for firm $i$ and is homogeneous of degree zero in prices, as in basic consumer theory.
This allows us to define \emph{inefficient} price dispersion as the ratio between \emph{actual} and \emph{efficient} price dispersion. \begin{equation}\Delta_{t}=\frac{1}{\Delta_{t}^{*}}\int_{i}F_{i,t}(\frac{p^{**}_{i}}{P_{t}})\mathrm{d}{\mu_{i,t}} \end{equation}
This definition is consistent with the parametric form of $\Delta_{t}$ in the Calvo model given in (18), where I assumed firms faced the same demand and technology with no idiosyncratic shocks, which makes any price dispersion inefficient. \\ As before, $i$ is a number used to index an individual firm. Since I am allowing heterogeneity among firms' optimal prices and the possibility of multiple equilibria, it is necessary to be more precise about how $i$ is assigned. $i$ reflects the order of the firm in the price distribution. Therefore, there exists a positive monotonic relationship between $i$ and $\frac{p_{i,t}}{P_{t}}$. This also simplifies the existence of the defining integral.\footnote{Formally I have defined that $\Sigma_{t}$ contains a countable family of pure points, representing firms interacting strategically together, and another family of Borel sets which are continua of firms who take the aggregate economy as given, corresponding to perfect or monopolistic competition as in Calvo, Taylor and other New Keynesian models. The associated measure $\mu_{i}$ is positive as it reflects shares of goods in aggregate consumption- the existence of a discrete Lebesgue integral over the pure point sets follows immediately. \cite{roydenreal} proves that monotone functions on Borel sets in $\mathcal{R}$ possess Lebesgue integrals. This demonstrates that a measure $\mu_{t}$ exists over all sets in $\Sigma_{t}$.} $\Omega_{t}$ is the set of all prices in the economy at time $t$. $\Sigma_{t}$ is the family of sets of individual firms over which output can be aggregated.\footnote{In mathematical terms $\Omega_{t}$ is the measurable space of firms and $\Sigma_{t}$ is the smallest sigma-algebra which contains sets of firms that produce positive output and the empty set with no firms in it. The distinction between membership of $\Sigma_{t}$ and $\Omega_{t}$ is operative in the specification of many macroeconomic models with imperfect competition which use a continuum of firms, for which countable subsets of prices would belong to $\Omega_{t}$ but not $\Sigma_{t}$. The two coincide in real life, where the set of prices at a point in time is countable.}\\ Hence $p^{**}_{i,t}$ represents the price that a firm at the corresponding point in the price distribution would set if \emph{all} prices were flexible and information perfect, as in the RBC framework. This differs from $p^{*}_{t}$, which features in staggered price setting models such as the Calvo model in Section 2 and the Taylor contracting models in this section, where $p^{*}_{t}$ represents the price a firm whose price is fully flexible today would set, taking into account that other prices in the economy are rigid. The notion that rigidity of prices elsewhere in the economy causes flexibly reset prices to differ from those that would be set in a fully flexible world ($p^{**}_{i,t}\neq p^{*}_{i,t}$) is known as \emph{real rigidity}, see \cite{ball1990real}. It is present in major New Keynesian models. For example, in the Calvo model from Section 2 the relevant equation is (16). The RBC skeleton has a symmetric equilibrium so $p^{**}_{t}=P_{t}$; however the optimal reset price $p^{*}_{t}$ only equals $P_{t}$ when $\pi_{t}=0$, otherwise $p^{**}_{t} \neq p^{*}_{t}$, so there is real rigidity. It would be interesting to explore the link between these two concepts further.
To complete the generalization it is necessary to define two properties that DSGE models or more specifically their associated price distributions may possess. The first is \emph{aggregate nominal rigidity}
\begin{definition}An economy possesses aggregate nominal rigidity if there exists a measurable set of firms $\mathcal{B}^{1}\in \Sigma_{t}$ such that, for some $l>0$, $p_{i,t}=\Phi_{i}(p_{i,t-l},.)$, and there exists an inflation rate $\bar{\pi}(\sigma_{t})$ such that $\pi_{t} \neq \bar{\pi}(\sigma_{t})$ implies that $p_{i,t} \neq p^{**}_{i,t}$. Also it must be the case that $\forall l'$ with $0< l'<l$ we can write $p_{i,t-l'}=\Phi_{i,l-l'}(p_{i,t-l})$, where $\pi_{t-l'} \neq \bar{\pi}(\sigma_{t})$ implies $p_{i,t-l'}\neq p^{**}_{i,t-l'}$.\end{definition}
The first part states that to have aggregate price rigidity there must be a positive fraction of output sold at a price that reflects past prices $l$ periods back and differs from those that would prevail in the flexible economy. There is an allowance that if inflation hits a certain value the two could coincide, as would occur in Calvo contracting starting from no price dispersion when inflation is zero, or equal to a target $\bar{\pi}$ to which all prices are indexed, either directly as in \cite{yun1996nominal} or to last period's inflation as in \cite{smets2003estimated}\cite{christiano2005nominal}\cite{smets2007shocks}.\footnote{With Taylor contracts we have to allow for the possibility that there may exist reset prices consistent with no inefficient price dispersion that differ over time. This is because prices replace one another under Taylor contracting, see equation (49), so the price that sets $\Delta_{t}$ to one will depend on the price it is replacing, in that case $p^{*}_{T-M}$.} The second part serves to ensure dependence between past and current price levels, and therefore past and current levels of price dispersion. This means there cannot be aggregate price rigidity in an otherwise flexible economy just because there are backward-looking or cycling prices. This restriction has economic content. Many items are sold on temporary discount and upon expiry return to their old level. Fortunately, recent New Keynesian models that feature products on sale avoid this trap, as they are able to generate changes in the frequency or size of discounts in response to monetary shocks that would be neutral in the model's RBC skeleton- which implies purchases are made at $p_{t}\neq p^{**}_{t}$, see \cite{kehoe2008temporary}\cite{guimaraes2011sales}\cite{nakamura2011price}\cite{eichenbaum2011reference}\cite{malin2015informational}.\footnote{Note that as these papers do not tend to use log-linearization around a non-stochastic steady state to characterize business cycle dynamics, they are immune to the subsequent criticisms of this section.}
\\ As well as traditional RBC models, the definition of aggregate price rigidity excludes recent New Monetarist models. In this framework there is a distribution of prices motivated by a flexible price microeconomic model; it has become common to use a model with costly search such as \cite{burdett1983equilibrium} \cite{albrecht1984equilibrium} \cite{burdett1998wage}, as these can provide a rationale for agents to hold money if there are appropriate credit constraints, see \cite{lagos2005unified}\cite{williamson2010new}. Their point is that, provided the distributions of prices overlap from period to period, it is possible for some firms not to change their price in equilibrium, even though money will be exactly neutral, because the market equilibrium in their model does not depend on the money supply. They claim therefore that price rigidity does not imply monetary non-neutrality. \\ Their claim is completely correct. However, this is because their model does not possess aggregate nominal rigidity. It has equilibria where individual firms choose to keep prices fixed because, by equilibrium construction, they do not care about their position in the price distribution. On the other hand, with my approach of indexing firms by their position in the distribution, there is no nominal rigidity: at each point in the distribution the appropriate firm raises its price one-for-one with the money supply, so the aggregate price level is perfectly flexible. \\ It is necessary to impose one further condition on the price distribution, called \emph{nominal heterogeneity}. This states that the degree of nominal distortion, represented by the ratio between the actual price and the flexible model price $p_{i,t}/p^{**}_{i,t}$, must vary between firms.
\begin{definition}An economy possesses aggregate nominal heterogeneity if $\exists \mathcal{B^{1}}, \mathcal{B^{2}} \in \Sigma_{t}$ such that $\int_{\mathcal{B^{2}}} p_{i,t}/p^{**}_{i,t} \mathrm{d}{\mu_{i}}- \int_{\mathcal{B^{1}}} p_{i,t}/p^{**}_{i,t} \mathrm{d}{\mu_{i}}> 0$.\end{definition}
This restriction rules out stylized models such as \cite{barro1972theory}\cite{sheshinski1977inflation}\cite{rotemberg1982monopolistic}\cite{mankiw1985small}, where all firms set the same price, motivated by physical costs of price changing that do not differ among firms; as argued earlier, such models are often unable to generate sufficient nominal rigidity. One of their creators refers to them as "toy models" see FIND REFERENCE. In any case, physical costs of price changing vary substantially across firms and products, see \cite{levy1997magnitude}- consistent with observed heterogeneity in the frequency of price adjustment found in empirical studies such as \cite{dhyne2006price}\cite{dickens2007wages}\cite{dixon2012generalised}.
\\ Note that \emph{efficient} price dispersion is a \emph{relative} concept of efficiency. It compares the actual distribution of prices to a corresponding model with flexible prices, perfect information and profit maximization.\footnote{With suitable modification of the RBC skeleton, profit maximization could be relaxed to firm objective maximization, to allow for, among other factors, risk aversion or behavioral factors as considered by, for example, \cite{jaimovich2007behavioral} \cite{choudhary2010risk}, provided it did not induce direct dependence between today's optimal price and past optimal prices conditional on other shocks and parameters in the model.} The price dispersion that is efficient from the firms' point of view, $\Delta^{*}_{t}$, could be inefficient from a social planner's point of view, because there are other inefficiencies in the economy (e.g. imperfect competition or asymmetric information) that cause welfare to fall below its social optimum. In fact it could be constrained (second-best) efficient to have inefficient price dispersion in order to help mitigate other uncorrected externalities. A prominent attempt to demonstrate this point is the burgeoning optimal inflation rate literature, where it is common to augment benchmark New Keynesian models capable of generating (inefficient) price dispersion, such as the basic Calvo model, with additional frictions in order to derive non-zero optimal inflation targets.
\footnote{The first friction studied was the existence of non-interest bearing money, which brought the deflationary forces of the Friedman rule into play, see \cite{khan2003optimal}\cite{adao2003gaps}\cite{schmitt2004optimal}\cite{schmitt2007optimal}. Subsequently, further imperfections in the product, goods and labor markets have been considered \cite{collard2005tax}\cite{pontiggia2012optimal}\cite{ikeda2015optimal}, along with a binding lower bound on nominal interest rates \cite{billi2011optimal}\cite{coibion2012optimal}\cite{eggertsson2013inflation}\cite{eggertsson2015dynamic}. As this literature only considers price dispersion owing to trend inflation, and ignores the additional dispersion created by \emph{stochastic} shocks, there will be an upward bias in reported optimal inflation. It is beyond the scope of this paper to quantify this omission.} In fact, as inflation implies inefficient price dispersion\footnote{This follows from Theorem 1. The example in Appendix B.2.1, where inflation corrects initial price dispersion, does not apply to a Divine Coincidence framework, as detailed in Section 4. Many of these 'second-best' considerations could arise in a model with stochastic price dispersion and would constitute interesting extensions of this paper. Although, as I characterize optimal policy in Section 5 by a different welfare standard, results would not be directly comparable.}, all existing attempts to resolve the 'Divine Coincidence' in Section 4 can be viewed through this lens also- although I show existing calculations are not correct.
To allow for heterogeneity between firms, one would redefine $\Delta$ to normalize each firm's price relative to its optimal reset price.\footnote{\cite{yun2011reconsidering}\cite{fuhrer2000habit}\cite{dennis2009consumption}\cite{ravn2010deep}\cite{givens2013deep}\cite{santoro2014loss}\cite{lewis2012firm}\cite{lewis2015entry}\cite{etro2015new} consider various alternative demand systems with a variety of motivations.} I could even alter the source of the price dispersion from staggered to, for example, information-constrained price-setting; \cite{mankiw2002sticky}\cite{mankiw2006pervasive}\cite{mankiw2007sticky}\cite{lorenzoni2009theory}\cite{lorenzoni2010optimal}\cite{nimark2008dynamic}\cite{nimark2014man}\cite{barrdear2015towards}\cite{adam2007optimal}\cite{mackowiak2008business}\cite{paciello2014exogenous} are papers where this price dispersion is present but not accounted for. All that is required is a motivation for firms to set different prices when in a flexible price world
it would be efficient if they all set the same.\footnote{Price dispersion would also come about where there are physical costs of price changing, provided that firms face idiosyncratic shocks or differing adjustment costs. See for example \cite{gertler2008phillips}\cite{nakamura2008five}\cite{reiff2014menu}\cite{bouakez2009transmission}\cite{bouakez2014sectoral}. In these cases the relevant interpretation of the price dispersion variable $\Delta$ is the difference between the actual and flexible price. Suppose, for example, fixed adjustment costs of a price change varying across firms with an aggregate shock- $\Delta > 1$ will come about if some firms' adjustment costs are below, and some above, the common adjustment threshold. Similarly, with a common fixed adjustment cost but idiosyncratic shocks, $\Delta > 1$ will occur if some firms keep their price constant because their idiosyncratic shock 'cancels out' the aggregate shock, i.e. they remain inside their band of price inaction.} The behavior of price dispersion and its dynamics are integral to all the analysis which follows. All results that do not refer to a specific specification of sticky price setting (e.g. Calvo or Taylor) generalize to all models covered by this lemma.
\\ The fundamental mechanism in this paper is that inflation causes price dispersion. The most general statement that can be made is as follows. It assumes away nominal indexation; consult Appendix B for an analogous result with nominal indexation in place.
\begin{theorem}With non-trivial price rigidity, if $\pi_{t} \neq 0$ then $\exists t' \leq t$ such that $\Delta_{t'}>1$.\end{theorem}
\begin{proof}The result is trivial if $\Delta_{t-1} > 1$, so assume $\Delta_{t-1}=1$. By the hypothesis that $\pi_{t} \neq 0$ it follows that $P_{t} \neq P_{t-1}$. Now we know from Lemma 2 that $\Delta_{t-1}=1$ implies that $\forall p_{i} \in \Omega_{t-1}$, $p_{i}=P_{t-1}$. Therefore $\pi_{t} \neq 0$ requires that $\exists p_{j} \in \Omega_{t}$ such that $p_{j} \neq p_{i}$. The assumption of non-trivial price rigidity ensures that there exists $p_{k} \in \Omega_{t-1}\cap\Omega_{t}$; from the first part $p_{k}= p_{i}$, therefore from the second part, for some $ p_{j} \in \Omega_{t}$, $p_{k} \neq p_{j}$, so by Lemma 2 $\Delta_{t}> 1$.\end{proof}
Note that this general result cannot be tightened to link inflation to \emph{contemporaneous} price dispersion, because this would not encompass models with Taylor contracts. The reason is that with Taylor contracts the price required to remove price dispersion can differ from that required to stabilize prices (zero inflation). This is because under Taylor contracts inflation is determined by a comparison between current reset prices and those they are replacing, whilst price dispersion is determined by the difference between current reset prices and past reset prices that have \emph{not} been replaced. Intuitively, non-zero current inflation can cancel out past price dispersion. Under Calvo, where old reset prices never disappear, the zero-price-dispersion and zero-inflation reset prices coincide, so the relationship must be contemporaneous. Appendix A offers a simple numerical example to clarify these points.
Indeed, price dispersion persists even if the shock process generating it is not present in all time periods. The most general statement can be made in the context of the benchmark Calvo model set out in Section 2.
\subsection{Price Dispersion with Calvo Pricing}
\begin{remark}If inflation $\pi_{t}$ is ever non-zero in the Calvo model, price dispersion $\hat{\Delta}_{t} > 0$ will exist in all subsequent periods. \end{remark}
The result follows simply from applying Lemma 2 and noting that the set of prices in the economy $\Omega_{t}$ includes every previous reset price. This is because the fraction of prices in the economy equal to a given reset price $p^{*}_{t}$ never falls to zero, no matter how far into the future one moves, since $\iota_{T}(p^{*}_{t})=\alpha^{T-t}(1-\alpha) > 0$ $\forall \;T > t$. This result has powerful implications for the class of equilibrium that can exist in a model with Calvo pricing. In particular, it implies the current concept of equilibrium used in the literature, the non-stochastic equilibrium, does not exist in a Calvo model if it has ever had price dispersion.
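To illustrate the point, the vintage weights are simple to tabulate. A minimal sketch (Python, illustrative $\alpha=0.75$):
\begin{verbatim}
# Weight on a vintage-t reset price after k = T - t periods under Calvo:
# geometrically decaying but never exactly zero.
alpha = 0.75
weights = [(alpha ** k) * (1 - alpha) for k in range(40)]
print(min(weights) > 0.0, sum(weights))  # True, mass 1 - alpha**40
\end{verbatim}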
\begin{definition} The behavior of the New Keynesian model from time $t$ can be represented\footnote{The arguments in Section 2 demonstrate a topological conjugacy between $Z_{t}$ and the complete model- this topic is discussed in more detail in Section 3.6.} by the continuation path $\mathcal{Z}^{C}_{t}=\langle Z_{t},Z_{t+1}, \cdots \rangle$, where $Z_{t}=(\pi_{t},y^{e}_{t},\Delta_{t})$ is governed by all the conditions set out in Section 2 apart from the policy rule equation (16).\end{definition}
The specific policy rule equation (16) is omitted to allow consideration of the constraints on policy rules in general. Here policy is represented implicitly by continuation paths $\{\mathcal{\pi}^{C}_{t}, \mathpzc{y}^{C}_{t}\}$- policy rules comparable to (16), but possibly time dependent, could then be derived using the Euler equation (7).\footnote{For convenience I use raw output $y_{t}$ rather than the efficient output gap $y^{e}_{t}$ introduced in Section 2 to characterize policy. The two formulations are equivalent, as however the output gap variable is defined there must be a one-to-one mapping between them in a non-stochastic world. I suspend discussion of what is a good definition of the output gap for the empirically relevant case of a stochastic model until Section 6(??).}
\begin{definition}A non-stochastic continuation path from time $t$, denoted $\mathcal{\bar{Z}}^{C}_{t}=\langle \bar{Z}_{t}, \bar{Z}_{t+1},\dotsc,\bar{Z}_{t+\tau}, \dotsc \rangle$, is one where $\forall \tau \geq 0$ \; $\Pr(Z_{t+\tau}=\bar{Z}_{t+\tau})=1$, i.e. there is no uncertainty about future variables. \end{definition}
A \emph{non-stochastic continuation path} corresponds to a perfect foresight model with initial value $Z_{t}$, as $Z_{t}$ depends continuously on the shock-carrying parameters from Section 2, $\Theta_{T}=(\psi_{T}, \varphi_{T}, A_{T})$. The perfect foresight applies to the continuation of the shock processes also:\footnote{Formally there is a non-stochastic continuation path for $Z_{t}$ if and only if there is a non-stochastic equilibrium for $\Theta_{t}$, with the \emph{only if} following from the continuous dependence and the \emph{if} following from the fact that $\Theta_{t}$ is the only source of uncertainty in the model.} $\mathcal{\Theta}^{C}_{t}=\mathcal{\bar{\Theta}}^{C}_{t}=\langle \bar{\Theta}_{t}, \bar{\Theta}_{t+1}, \dotsc, \bar{\Theta}_{t+\tau}, \dotsc \rangle$, so $\Pr(\Theta_{t+\tau}=\bar{\Theta}_{t+\tau})=1$ $\forall \tau \geq 0$.
A stronger concept is that of a \emph{stable non-stochastic continuation path} defined as follows.
\begin{definition} A stable non-stochastic continuation path, denoted $(\mathcal{\bar{Z}^{*}})^{C}_{t}$, is a non-stochastic continuation path where the shock processes are held constant: $\Pr(\Theta_{t+\tau}=\bar{\Theta})=1$ $\forall \tau \geq 0$. \end{definition}
This corresponds to a non-stochastic model with initial position $Z_{t}$. Finally, the strongest concept is that of a non-stochastic equilibrium path from $t$.
\begin{definition}A non-stochastic equilibrium path from $t$ of a New Keynesian model, denoted $(\mathcal{{\bar{Z}}^{**}})^{C}_{t}$, is a non-stochastic continuation path where $Z_{t+\tau}=\bar{Z}$ $\forall \tau \geq 0$, i.e. all the variables remain constant in all future periods for sure. \end{definition}
In other words, a non-stochastic equilibrium is a fixed point of the system where every future variable is certain to be constant at its present period value forever. This is the basic solution concept in microeconomics and growth theory. It is natural that macroeconomists wish to apply it to the New Keynesian model also. However, whenever price dispersion is possible this is not in general correct, even in the extreme case of a non-stochastic continuation path.\footnote{Formally, a non-stochastic continuation path is necessary but not sufficient for a non-stochastic equilibrium. The necessity follows from noting that otherwise expectation errors could lead actual and expected values to diverge, so that $\Pr(Z_{t+\tau}=\bar{Z})< 1$. The non-sufficiency follows from the following counter-example.}
\begin{definition}$\mathcal{Z}^{H}_{t}=\langle \cdots ,Z_{t-1}, Z_{t} \rangle $ denotes the history of the variable $Z$ up to time $t$ \end{definition} Let $\Delta(\bar{\pi})=\Delta(\mathcal{Z}^{H}_{t}=\langle \cdots ,\bar{\pi}, \bar{\pi} \rangle )$
\begin{proposition}In a model with Calvo pricing, a non-stochastic equilibrium $(\mathcal{{\bar{Z}}^{**}})^{C}_{t}$ with $\pi_{T}= \bar{\pi}$ from time $t$ will only exist if $\Delta_{t}=\Delta(\bar{\pi})$. \end{proposition}
\begin{proof}The argument proceeds by contradiction. Suppose the converse; then (21) takes the form of the following deterministic difference equation.
\begin{equation}\Delta_{T}=\Delta(\bar{\pi})+{\vartheta}^{T-t}(\Delta_{t}-\Delta(\bar{\pi}))\end{equation}
Where $$\vartheta=\alpha(1+\bar{\pi})^{\theta}$$ $$\Delta(\bar{\pi})=(1+\bar{\pi})^{\theta}\frac{(1+\bar{\pi}-\alpha)^{\frac{\theta}{\theta-1}}}{(1-\alpha)^{\frac{1}{\theta-1}}(1-\alpha(1+\bar{\pi})^{\theta})}$$ $\Delta(\bar{\pi})$ is the non-stochastic steady state price dispersion and $\vartheta < 1$. This restriction is a requirement to ensure that a steady state exists; if not, $\Delta$ would grow without bound, which would cause consumption $C$ to tend to zero, violating the transversality condition, equation (6).\footnote{The threshold inflation rate is $\bar{\bar{\pi}}=\left(\frac{1}{\alpha}\right)^{\frac{1}{\theta}}-1$. This restriction is well-known in the trend inflation literature. For developed countries it tends to be met quite easily, see \cite{ascari2014macroeconomics}. I assume in this paper that it is always met for sure.} Now it is clear that if $\Delta_{t}\neq \Delta(\bar{\pi})$ then $\Delta_{T}$ is time dependent, contradicting the definition of a non-stochastic equilibrium for $Z$ from time $t$. Note also that as $0< \vartheta < 1$ the economy converges monotonically towards its non-stochastic steady state but does not reach it in finite time. Therefore we know that $1 < \Delta_{T} < \max\{\Delta_{t},\Delta(\bar{\pi}) \}, \, \forall \, T > t$. \end{proof}
For the zero-inflation steady state, the behavior of $\Delta$ simplifies considerably to $$\Delta_{t}=1-\alpha +\alpha \Delta_{t-1}$$ with solution
$$\Delta_{t}=1+\alpha^{t-t_{0}}(\Delta_{t_{0}}-1)> 1=\bar{\Delta}$$
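A minimal simulation of this recursion (Python, illustrative values $\alpha=0.75$ and $\Delta_{t_{0}}=1.05$) confirms the monotone decay towards, but never to, $\bar{\Delta}=1$:
\begin{verbatim}
# Zero-inflation dynamics: Delta_t = 1 - alpha + alpha * Delta_{t-1}.
alpha, Delta = 0.75, 1.05
path = []
for _ in range(60):
    Delta = 1.0 - alpha + alpha * Delta
    path.append(Delta)
print(path[0], path[-1])  # 1.0375 -> 1.0000...; still strictly > 1
\end{verbatim}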
In other words, the persistent behavior of the backward-looking price dispersion term stops the forward-looking variables, the output gap and inflation, from reaching equilibrium. Note however that the effect of initial price dispersion decays away when inflation is kept constant, since $\lim_{T \rightarrow \infty}\Delta_{T}=\Delta(\bar{\pi})\; \forall \Delta_{t}$. The (non-linear) Phillips Curve relation corresponding to (24)? ensures $y_{t}$ also has a limit (by the open mapping theorem once again), thus with a stable inflation policy the economy is ergodic: $\lim_{T \rightarrow \infty}Z_{T}=\bar{Z}\; \forall Z_{t}$. The property that in the infinite limit a dynamical system "forgets" its initial position is called ergodicity. In Subsection 3.3(?), the stochastic analogue of this concept will be used to define a meaningful notion of dynamic stochastic general equilibrium (DSGE).\\ Take $t> t_{0}$. The path with $\pi_{t}=\bar{\pi}$, $y_{t}=\bar{y}$ is not in fact an equilibrium of the system, because it does not conform with all the optimization and market clearing conditions that define the dynamical system laid out in Section 2. To see this, note that the price construction equation (16) implies each $\pi_{t}$ maps to only one reset price $\frac{p^{*}_{t}}{P_{t}}$: the relationship between the optimal reset price and inflation is strictly monotonic, as $\frac{d \pi_{t}}{d (p^{*}_{t}/P_{t})}=\frac{1-\alpha}{\alpha}\frac{(p^{*}_{t}/P_{t})^{-\theta}}{(1+\pi_{t})^{\theta-2}}> 0$. Therefore the reset price will be constant in every period, $\frac{p^{*}_{t}}{P_{t}}=\overline{p^{*}/P}$. By recursively solving the optimal reset price equation (15), this implies real marginal costs $\varphi_{t}=\bar{\varphi}$ will also be constant. From the marginal cost expression, equation (12), with no technology shocks $A_{t}=\bar{A}$, the real wage must be constant, $W_{t}=\bar{W}$. Now note that in equilibrium the market clearing condition implies that when $\Delta$ decreases, labor $L$ will decrease one-for-one, which sets up a contradiction when we consider the optimal labor supply condition, equation (8): by assumption the left hand side is constant but the right hand side must be decreasing- which completes the proof. Therefore persistence in the backward-looking variable $\Delta$ will transmit to the other variables in the model. This mechanism is central to the analysis of the paper. Also note the following:
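For completeness, the differentiation behind this monotonicity claim can be spelled out. The steps below assume the price-level construction takes the standard Calvo form $P_{t}^{1-\theta}=\alpha P_{t-1}^{1-\theta}+(1-\alpha)({p^{*}_{t}})^{1-\theta}$, which is my reading of equation (16). Dividing through by $P_{t}^{1-\theta}$ gives
$$1=\alpha(1+\pi_{t})^{\theta-1}+(1-\alpha)\Big(\frac{p^{*}_{t}}{P_{t}}\Big)^{1-\theta}$$
and differentiating totally,
$$\alpha(\theta-1)(1+\pi_{t})^{\theta-2}\,d\pi_{t}=(1-\alpha)(\theta-1)\Big(\frac{p^{*}_{t}}{P_{t}}\Big)^{-\theta}\,d\Big(\frac{p^{*}_{t}}{P_{t}}\Big)$$
which rearranges to the derivative in the text, strictly positive for $\theta > 1$.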
\begin{corollary}With Calvo pricing, there will always exist a non-trivial trade-off between inflation and output gap stabilization if there has ever been inflation variability.\end{corollary}
I have just shown that $\pi_{t}=\bar{\pi}$ implies $y_{t}^{e} \neq \bar{y}^{e}\;\forall t> t_{0}$ if there was initial price dispersion $\Delta_{t_{0}} > 1$; this implies that $y_{t}^{e} = \bar{y}^{e}\;\forall t > t_{0}$ only if $\pi_{t} \neq \bar{\pi}\;\forall t> t_{0}$. The link with inflation variability is provided by Lemma 2: it does not matter when the inflation variability took place, because of the permanence of price dispersion result, Remark 1.
\\This is a profound result: it demonstrates that in a Calvo model there exists a non-trivial trade-off between inflation and output stabilization in any non-degenerate stochastic environment. Its theoretical significance is discussed in detail in Section 4, where its implications for optimal policy are derived.
\subsubsection{Price Dispersion with Taylor Contracting}
This sub-section extends the results from the Calvo model to the Taylor contracting framework. Several results change, but the theme that price dispersion generates staggered adjustment of the economy to shocks is retained. Furthermore, assuming the economy jumps to non-stochastic equilibrium immediately gives a misleading account of the behavior of price dispersion, with implications for welfare.
\\ There is no analogue of Remark 1 with Taylor contracts: there is a maximum contract length, so all contracts will eventually disappear from the price level- creating the possibility for price dispersion to disappear, i.e. $\hat{\Delta}= 0$, if all the reset prices are the same for sufficiently long. Therefore Proposition 3 and Corollary 2 apply only for as long as there exist prices set before $T=t_{0}$ that have not been reset.
I focus here on the case of simple Taylor contracts; Appendix ? covers the simple extension to the Generalized Taylor economy of \cite{dixon2012generalised}. Consider a simple Taylor economy where contracts last for $M$ periods. There is staggered price adjustment, so a fraction $1/M$ of firms are allowed to reset their price each period, in the knowledge that this price will remain fixed for exactly $M$ periods. Therefore the price level construction equation takes the form:
\begin{equation} P_{t}^{1-\theta}=\frac{1}{M}{p_{t}^{*}}^{1-\theta} + \frac{1}{M}{p_{t-1}^{*}}^{1-\theta} + \cdots + \frac{1}{M}{p_{t-(M-1)}^{*}}^{1-\theta}\end{equation}
Firms set their reset prices as a weighted average of real marginal costs over the course of the contract so:
\begin{equation}\max_{p_{t}(i)}E_{t}\sum_{T=t}^{t+M-1}Q_{t,T}[\frac{p_{t}(i)}{P_{T}}y_{T}-\varphi_{T} y_{T}]\end{equation}
Changes in the price level reflect the difference between the current reset price $p_{t}^{*}$ and the price it replaced, $p_{t-M}^{*}$.
\begin{equation}P_{t}^{1-\theta}-P_{t-1}^{1-\theta}= \frac{1}{M}(p_{t}^{*})^{1-\theta}- \frac{1}{M}(p_{t-M}^{*})^{1-\theta} \end{equation}
from which can be derived the following expression for inflation
\begin{equation}(1+\pi_{t})^{\theta-1}=1+ \frac{1}{M} (\frac{p_{t-M}^{*}}{P_{t}})^{1-\theta}- \frac{1}{M}(\frac{p_{t}^{*}}{P_{t}})^{1-\theta}\end{equation}
Now the evolution equation for $\Delta$ analagous to equation (19) in the Calvo model is
\begin{equation} \Delta_{t}= \frac{1}{M}\left[\left(\frac{p_{t}^{*}}{P_{t}}\right)^{-\theta}-\left(\frac{p_{t-M}^{*}}{P_{t}}\right)^{-\theta}\right]+ (1+\pi_{t})^{\theta}\Delta_{t-1} \end{equation}
To see that a trade-off between inflation and output stabilization exists for the first $M-1$ periods, I proceed by contradiction. Assume an equilibrium $Z_{t}=\bar{Z}$ exists. From equation (45) you can see that to have a constant level of inflation $\pi_{t}=\bar{\pi}$ there must be a one-for-one relationship between $p_{t}^{*}$ and $p_{t-M}^{*}$. Given $\Delta_{t_0}> 1$, there must be at least one price in the period $t$ price level, $p^{*}_{t-j}$ where $0 < j < M$, such that $p^{*}_{t-j}\neq p^{*}_{t}$. When this price comes to be replaced, at $t-j+M$, the optimal reset price equation (43) requires that $W_{t} \neq W_{t-j+M}$. Now by hypothesis $y_{t}=\bar{y}$, so from equation (8) the real wage and labor supply must move in the same direction- however this means aggregate income has changed- which violates equation (9), the condition that all income must be consumed. From period $t+M$ onwards the economy reaches non-stochastic equilibrium, as $\Delta=1$. Therefore it takes precisely $M$ periods for the Taylor economy to transition to non-stochastic equilibrium. This equilibrium is efficient, i.e. equal to the flexible price output. The non-stochastic system is therefore ergodic. The result extends easily to the Generalized Taylor set-up, where the non-stochastic equilibrium is reached after $J$ periods, where $J$ is the length of the longest contract.
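A minimal simulation (Python, illustrative values rather than a calibration) confirms that dispersion in a simple Taylor economy dies out after exactly $M$ periods once reset prices settle down:
\begin{verbatim}
import numpy as np

# M-period Taylor contracts: start from dispersed reset prices, then hold
# the reset price fixed; Delta returns to 1 after exactly M replacements.
theta, M = 6.0, 4
resets = [1.00, 1.02, 0.99, 1.03]      # outstanding reset prices
for t in range(1, 2 * M + 1):
    resets = resets[1:] + [1.01]       # oldest contract replaced at p* = 1.01
    P = np.mean([p ** (1 - theta) for p in resets]) ** (1 / (1 - theta))
    Delta = np.mean([(p / P) ** (-theta) for p in resets])
    print(t, round(Delta, 10))         # Delta > 1 for t < M, Delta = 1 after
\end{verbatim}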
\subsubsection{Applications} It is common to represent a New Keynesian economy with staggered nominal adjustment as switching between alternate non-stochastic steady states with no dynamic adjustment. This sub-section has proven that this approach is erroneous. The two models to which this approach has been most commonly employed have featured binding zero lower bounds on monetary policy and switches in monetary policy regime. It has also been used to model major exchange rate devaluation and structural adjustment episodes \cite{uribe2012pegs}\cite{schmitt2012prudential}\cite{farhi2012fiscal}\cite{farhi2014labor}\cite{na2014model}\cite{eggertsson2012new}\cite{eggertsson2014can}.
\\ Zero lower bound models seek to operationalize Keynes' idea of a liquidity trap. The idea is that arbitrage with money, which yields a zero nominal interest rate, prevents the Central Bank from cutting nominal interest rates below zero. Therefore, if there is a sufficiently large fall in aggregate demand that the desired nominal interest rate falls below zero, the zero bound will bind, such that the economy will be demand-constrained with inefficiently low output. Crucially, in a New Keynesian model, because the representative consumer is forward-looking, liquidity traps cannot be permanent, or the consumption problem explodes and the transversality condition is violated\footnote{In an overlapping generations model with borrowers and savers this result does not apply, as demonstrated by \cite{eggertsson2014model}, although a permanent liquidity trap would appear implausible.}. The literature on zero lower bound models is voluminous. They have been used to explain large economic contractions following financial crises and to study optimal monetary and fiscal policy responses that might mitigate or overcome the constraint of the zero bound on nominal interest rates \cite{corsetti2010debt}\cite{lorenzoni2011credit} \cite{eggertsson2012debt} \cite{farhi2013theory}\cite{corsetti2013sovereign} \cite{correia2013unconventional}\footnote{See also \cite{woodford2011simple} \cite{eggertsson2013inflation} \cite{benigno2014dynamic} \cite{denes2013deficits} \cite{eggertsson2004policy}\cite{eggertsson2006fiscal}\cite{eggertsson2011fiscal}\cite{eggertsson2009response}\cite{adam2007discretionary}\cite{werning2011managing} \cite{cook2011optimal}\cite{cook2011cooperative} \cite{cook2013sharing} \cite{araujo2013conventional}\cite{schmitt2014liquidity}.}. Unsurprisingly, such models have become extremely popular following the global financial crisis of 2008 and the subsequent spell of near-zero short interest rates across major industrialized economies. \\The timing convention in these two-steady-state models is as follows: the economy starts at non-stochastic equilibrium, then there is an unanticipated shock. The shock has to be unanticipated, or inflation would fall as the time of the zero bound spell approached, in anticipation of the future deflation. To clarify, this behavior would not contradict Proposition 1, which is derived under the assumption that there is no zero bound on nominal interest rates. Finally, leaning on the forward-lookingness derived in Proposition 1, the model is closed with the economy jumping back to its non-stochastic steady state. This amounts to assuming the liquidity trap is a one-off (or occurs with vanishingly small probability), but would be valid if the probability of the economy transitioning from the normal-times benchmark equilibrium to a liquidity trap were sufficiently small to make the expected deflation associated with future zero bound spells of an order of magnitude less than or equal to the squared term in the series expansion of inflation\footnote{Whether this alternative assumption is valid is difficult to gauge for two reasons. First, it may be difficult to delineate zero bound spells, as their occurrence may be sensitive to changes in the policy environment- for example \cite{eggertsson2008great} assumes the United States was characterized by a deflationary liquidity trap during the early phase of the Great Depression 1929-1933, on the grounds that under a stabilizing policy regime it would have been.
However, in reality short term interest rates were significantly above zero throughout this time- which to me indicates the United States was not in a liquidity trap. Similarly, World War 2 mobilization and the financial repression measures designed to ease war financing make it very difficult to ascertain the model-consistent definition of the end of the zero bound spell- which is the point when private sector demand had fully recovered from the Great Depression shock, see \cite{reinhart2012return}. Finally, it has proved challenging to explain the length of zero bound spells simultaneously with the small fall in inflation- which calls into question the validity of any parameter estimates.}. This is wrong! All the above models employ Calvo pricing, which means that even if the economy begins at a non-stochastic equilibrium, it will never reach another one, either whilst the zero bound is binding or afterwards when the shock has been turned off. \\An alternative strategy has been to fix the length of the zero bound spell. The timing convention here is that the economy starts from non-stochastic equilibrium, then experiences a shock known in advance to last $T$ periods. The model is again closed with the economy jumping straight back to non-stochastic equilibrium (absent policy changes) when the shock is turned off. There is no steady state whilst the zero bound binds, as resetters at different times face different lengths of zero bound spell relative to the non-stochastic steady state equilibrium, so deflation will in fact moderate over the course of the zero bound spell. This approach is now the more popular \cite{cogan2010new}\cite{christiano2011government}\cite{amano2012risk}\cite{erceg2014there}
\cite{gertler2011model}\cite{gertlera2013qe}. \\However, the approach in these papers still falls foul of the results in this sub-section. The dynamics during the zero bound spell will be wrong, because these papers all ignore the effect of the price dispersion associated with the deflationary shock, implied by Lemma 2 and Corollary 1. Secondly, the economy will never return to steady state, because of the assumption of Calvo pricing in each of these papers and Corollary 2. \\Of course many results from these papers would stand up, as they do not depend on a particular specification of inflation dynamics. For example, \cite{rognlie2014investment} \cite{korinek2014liquidity} show that liquidity traps generate deep recessions followed by recoveries even when the extreme Old Keynesian assumption of no price changing during the liquidity trap is made in the context of a modern New Keynesian DSGE model. Indeed, the long-run properties will be unaffected, because of the ergodicity properties of the Taylor and Calvo contract frameworks- remember I was able to demonstrate the former very easily in Subsubsection 3.1.2; the proof for Calvo will be given in Subsection 3.4.
This has immediate implications for the recent class of liquidity trap models which use Calvo pricing and assume a deflationary steady state with Markov reversion to a non-stochastic steady state: the non-stochastic steady state will never be attained.
\subsection{Stochastic Environment}
\begin{theorem}If the distribution $F_{T}(\pi_{T})$ is non-degenerate at zero (i.e. $\Pr_{T}(\pi_{T}=0)\neq 1$) $\forall \, T \geq 1$, then price dispersion $\hat{\Delta}_{T}=\ln(\frac{\Delta_{T}}{\Delta_{SS}})$ is first order in expectation, $E \hat{\Delta}_{T}>0$ $\forall \,T \geq 1$, even when evaluated relative to the non-stochastic steady state $\Delta_{SS} =1$. \end{theorem}
\begin{proof} The result is trivial if $\hat{\Delta}_{0}>0$, so consider the alternative that $\hat{\Delta}_{0}=0$ and ${\Delta}_{0}=1$. The result follows quickly from establishing that, when there is non-trivial price rigidity, we can only have $\Delta_{t}=1$ and $\Delta_{t+1}=1$ if $\pi_{t+1}=0$. From Lemma 2 we know that $\Delta=1$ only if all prices in the economy are equal. Therefore in both periods the price level and all reset prices will be equal to a common price. Formally, $P_{t}= p^{*}_{t-a}=\bar{p}_{t}$ and $P_{t+1}= p^{*}_{t+1-a}=\bar{p}_{t+1}$ for all ages of price $a$. Now with non-trivial price rigidity there is some reset price $p^{*}_{t-j}$, $j \geq 0$, which belongs to both price levels. Hence the two are equal and $\pi_{t+1}=0$. Therefore, starting from $\Delta_{T}=1$, $\pi_{T+1} \neq 0$ implies $\Delta_{T+1}> 1$ and $\hat{\Delta}_{T+1}> 0$. Hence if $\Pr(p^{*}_{T} \neq P_{T})> 0$ then $$\Pr(\hat{\Delta}_{T} > 0) > 0$$ and we know that $\hat{\Delta}_{T} \geq 0$; therefore $E \hat{\Delta}_{T}> 0$ follows from a Chebyshev's inequality argument.
\end{proof}
The precise argument is the same as the one used to prove the first part of proposition 1 and is detailed in full in the appendix.
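The first-order nature of the result is easy to see numerically. The sketch below (Python) assumes the standard Calvo law of motion for $\Delta$, the parametric counterpart of equation (21), with illustrative parameter values; mean-zero inflation shocks still deliver $E\hat{\Delta}_{T}>0$:
\begin{verbatim}
import numpy as np

# Calvo law of motion (standard form, assumed here):
#   Delta_t = (1-a) x_t^(-th) + a (1+pi_t)^th Delta_{t-1},
#   x_t = p*_t/P_t with x_t^(1-th) = (1 - a(1+pi_t)^(th-1)) / (1-a).
a, th = 0.75, 6.0
rng = np.random.default_rng(3)
logD = []
for _ in range(200):                       # Monte Carlo replications
    Delta = 1.0
    for pi in rng.normal(0.0, 0.005, 400): # mean-zero inflation draws
        x = ((1 - a * (1 + pi) ** (th - 1)) / (1 - a)) ** (1 / (1 - th))
        Delta = (1 - a) * x ** (-th) + a * (1 + pi) ** th * Delta
    logD.append(np.log(Delta))
print(np.mean(logD) > 0, np.mean(logD))    # True: first order in expectation
\end{verbatim}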
The reason for this result is an error in the microfoundations of the New Keynesian model: you cannot have a New Keynesian model without dispersion. The critical observation is that the limiting case $\alpha=0$ belongs to the class of New Classical models such as \cite{lucas1972expectations} \cite{lucas1973some} \cite{sargent1976rational}.
\begin{subsection}{A New Market Failure: Price Dispersion and the Firm}
\begin{remark} Representing the economy by a linear perturbation around the non-stochastic steady state, where $\Delta_{t}=1$, is equivalent to positing the existence of a representative firm.\end{remark}
As we have seen in Subsection 2.4, when $\Delta=1$ its log-deviation is $\hat{\Delta}=0$. Hence $\Delta = 1$ in the subspace in which we are taking the approximation. From Lemma 2 this implies all firms set the same price and produce the same output. Therefore, suitably scaled up to the size of the economy, any firm could function as a representative firm.
\begin{theorem}When $\Delta > 1$ a representative firm does not exist.\end{theorem}
\begin{proof} By definition the candidate representative firm produces aggregate output $Y$. From the production function it must demand labor $L=\frac{Y}{A}$. This must equal the aggregate of the net output vectors $(y_{i}, -l_{i})$ of individual firms, whose behavior is specified by all the equilibrium conditions set out in Sections 2.1-2.2. This means that:
$$ Y= AL= \Delta Y $$
which yields a contradiction when $\Delta > 1$.
\end{proof}
The significance of this result is that it prevents us invoking the first fundamental theorem of welfare economics, as we can to prove the Pareto efficiency of the equilibrium in the benchmark RBC model. Indeed, as we usually work with models that yield unique solutions to both the social planner and market equilibrium problems, this result prevents us designing optimal tax schemes to decentralize social planner solutions. In this case, even if we could put in place a tax structure to correct the static distortions implied by imperfect competition, we would still have market failure. Price dispersion creates a \emph{Keynesian inefficiency} that cannot be corrected by any non-state-contingent tax system specified in advance of market trade, because even unforecastable shocks that change the optimal reset price cause inefficiency.\footnote{\cite{correia2008optimal} avoid this inefficiency by making the price level non-stochastic, which means the optimal reset price will never change in response to news.}\footnote{In fact, markets meet continuously but tax changes operate at a considerable lag. In this case the inefficiency result here obtains trivially, see Section 8 ??? In reality governments do not design tax policies with a view to correcting the effects of imperfect competition; this is usually delegated to competition authorities with more modest aims, who use price regulation and other measures rather than taxation.} This result encompasses and explains \cite{alves2014lack}, who uses positive trend inflation to create price dispersion and notes the Pareto inefficiency ('lack of divine coincidence'), and \cite{damjanovic2010relative}, who show that trend inflation acts like a negative productivity shock when added to a benchmark New Keynesian Phillips Curve like the one in Section 2.2. The explanation is that the price dispersion created by trend inflation drives a wedge between the efficient representative firm of the zero inflation steady state Phillips curve and the inefficient aggregate output associated with price dispersion.
\\The approximation is invalid. Intuitively, you cannot approximate the behavior of an economy characterized by a certain type of market failure by studying its behavior around an improbable limiting case where this market failure ceases to exist. In a model where price dispersion is not allowed, constraints on price-setting cannot bind.
\end{subsection}
\begin{subsection}{Lucas Critique and the New Keynesian Model}
In 1976 Robert Lucas published his eponymous critique \cite{lucas1976}, which showed that an empirical relation representing a trade-off of interest to a policymaker might break down when the objectives or instruments available to that policymaker changed.\footnote{The Lucas critique is well understood in policy circles, where it goes by the name Goodhart's law, after policy adviser Charles Goodhart's adage that: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for policy purposes." \cite{goodhart1984problems}} The result is widely seen as the seminal paper in modern macroeconomics, as commented by numerous interviewees in \cite{snowdon2005modern}. It was instrumental to the award of the 1995 Nobel Prize \cite{lucas1996nobel}. \\He illustrated this concept with two applications to fiscal policy, demonstrating how econometric relationships derived under one policy regime would break down under another. This finding was used to correctly predict that the marginal propensity to consume out of a temporary tax rebate would be significantly lower than indicated by a regression of consumption on disposable income, where changes in \emph{permanent income} provided a substantial portion of the observed variation in disposable income, see \cite{blinder1981temporary}\cite{lusardi1996permanent}\cite{taylor2009lack}\cite{auerbach2009activist}\cite{taylor2011empirical}\cite{parker2013consumer}\cite{kaplan2014model} among a vast literature. With the Phillips curve (the focus here) he shows that the slope of a simple New Classical Phillips curve will flatten as monetary policy becomes more active.\footnote{This part of the paper is actually less original: it is based on his earlier paper \cite{lucas1973some} and is implicit in all formulations of the natural rate hypothesis, whether or not rational expectations are imposed.} This point gained popular traction as it offered an explanation for the breakdown of the Old Keynesian Phillips curve relation between inflation and unemployment \cite{blackburn1992business} \cite{ravn1995stylized} \cite{benati2006uk}.\footnote{Whether Lucas' explanation was correct remains a point of contention in the literature, see \cite{blinder19811971}\cite{taylor1999historical}\cite{ireland1999does}\cite{orphanides2003quest}\cite{lubik2004testing}\cite{coibion2011monetary}\cite{blinder2012supply} \cite{needham2015britain} for various evidence and perspectives.} Formally, a statistical relationship is said to pass the Lucas critique if its parameter set $\mathfrak{B}$ is independent of the set of policy parameters $\mathfrak{P}$. This is an incredibly restrictive criterion: it requires complete monetary neutrality, a property which no monetary model, Keynesian or Classical, can meet.\footnote{The only model that would satisfy this condition with respect to the parameters of monetary policy is a pure real business cycle model with no money holding for its own sake, perfect information and full flexibility in prices, wages and money holding. However, this neutrality result is sufficiently trivial that such models are never used for monetary policy analysis and the money sector is always left out, take for example \cite{kydland1982time}\cite{long1983real}\cite{plosser1989understanding}\cite{king1999resuscitating}.} For example, in the RBC framework both money-in-the-utility-function and cash-in-advance constraints induce non-neutrality in response to unexpected monetary disturbances.
This creates a dependence between the parameters of the statistical relationship linking monetary and real variables and the structural parameters of the monetary policy process, which determine the magnitude of the monetary disturbances \cite{sidrauski1967rational}\cite{clower1967reconsideration}\cite{lucas1987money}\cite{cooley1989inflation}.\footnote{Consult \cite{walsh2010monetary} for a textbook exposition of these models.} In New Monetarist models money is not super-neutral because the trend rate of inflation can directly affect the technology of exchange \cite{williamson2010new}. In the New Keynesian tradition it is known that the trend rate of inflation affects both the business cycle dynamics and the long-run equilibrium \cite{ascari2014macroeconomics}.\footnote{These authors follow the convention of using the non-stochastic steady state as their equilibrium concept, which I show is not the best way to characterize a dynamic stochastic equilibrium. However, the results would carry over naturally to the central moments of the ergodic distribution, using the arguments derived in \cite{stokey1989recursive}.} In fact I am able to extend this result to derive a mapping between the parameters of monetary policy, reflecting the Central Bank's policy preferences, and the reduced-form relationships among all variables- a result not previously possible because of the 'Divine Coincidence' discussed in Section 5. \\Lucas was aware of the stricture of his critique. In the paper's conclusion he suggests two approaches that have proved fruitful. The first is to effectively circumvent the critique by arguing the economy has a vector of hyper-parameters $\bar{\theta}$ that are invariant to policy. The simplest way to operationalize this would be through a random coefficient model where $\theta_{t}=\bar{\theta}+\epsilon_{t}$, with $\epsilon_{t}$ distributed independently of the regressors and of the true shocks to the model, and having a bounded first moment; the model can then be estimated consistently \cite{snijders2011multilevel}.\footnote{Time-varying parameter models remain popular in the forecasting literature, where out-of-sample prediction tests and econometric devices such as principal components and shrinkage priors are now used to reduce much of the over-fitting common in Lucas' time, when data sets were shorter and computational arsenals weaker; see \cite{meese1983empirical}\cite{tashman2000out}\cite{stock2002macroeconomic}\cite{bernanke2003monetary}\cite{korobilis2013assessing}\cite{koop2013large}\cite{karlsson2013forecasting}\cite{giannone2015prior}. They have recently been applied to policy analysis, often in the context of full DSGE models; see for example \cite{stock2002macroeconomic}\cite{cogley2005drift}\cite{primiceri2005time}\cite{boivin2006has}\cite{ireland2007changes}\cite{perron2009let}\cite{korobilis2013assessing}. The limitation of these models is that only a very restricted set of the parameters are allowed to vary or strong priors are imposed. This becomes particularly problematic with large models such as \cite{smets2003estimated}\cite{smets2007shocks}\cite{schmitt2007optimal}\cite{christiano2005nominal}, including recent models emphasizing financial frictions like \cite{christiano2014risk}\cite{del2015inflation}, where the availability of financial data restricts the estimation period and makes incorporating parameter heterogeneity impractical.
With the parsimonious model I derive here it should be much easier to incorporate heterogeneity in the basic structural parameters without overfitting.} In a similar spirit are the models with stochastic regime-switching discussed earlier. The near-ubiquitous practice of filtering also helps remove the effect of unsystematic policy change. The second approach is quantitative policy simulation, where a dynamic stochastic general equilibrium model is simulated under different policy parameters; see \cite{kydland1996computational} for an exposition. The strong assumption that agents know the true structural parameters has been relaxed in an extensive learning literature.\footnote{See for example \cite{marcet1989convergence}\cite{bullard2016stability}\cite{evans2003adaptive}\cite{gaspar2006adaptive}\cite{milani2008learning}\cite{bianchi2013regime}\cite{matthes2015tales}\cite{cogley2015optimized}.} \\
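To fix ideas, the following sketch simulates the random coefficient formulation $\theta_{t}=\bar{\theta}+\epsilon_{t}$ described above. The data generating process and all numerical values are illustrative assumptions, not estimates: with $\epsilon_{t}$ independent of the regressor, ordinary least squares recovers the policy-invariant hyper-parameter $\bar{\theta}$.
\begin{verbatim}
import numpy as np

# Random-coefficient sketch: y_t = theta_t * x_t + u_t with
# theta_t = theta_bar + eps_t and eps_t independent of x_t,
# so OLS converges to theta_bar. All numbers are assumptions.
rng = np.random.default_rng(0)
T, theta_bar = 100_000, 0.6
x = rng.normal(size=T)                     # regressor
eps = rng.normal(scale=0.2, size=T)        # coefficient noise, independent of x
u = rng.normal(scale=0.5, size=T)          # regression error
y = (theta_bar + eps) * x + u

theta_hat = (x @ y) / (x @ x)              # OLS slope (no intercept)
print(theta_hat)                           # close to 0.6 for large T
\end{verbatim}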
Since all useful models fail the Lucas critique, it is necessary to develop a new criterion to adjudicate which econometric relationships are "useful for policy", in the sense of being \emph{representative} of the \emph{structural model} of the economy and consistent with the two agreed types of good practice. To do so I define six terms: two economic and four mathematical. The first is a structural model. A structural model is a system of relationships representing the behavior of the economy and the response of agents to well-defined incentives. I have in mind DSGE models, although the definition is fully general, comprising non-stochastic as well as partial equilibrium models, instances of multiple equilibria and even systems that do not possess a well-defined equilibrium at all.\footnote{Limitations of space prevent me from adequately reviewing these literatures here. \cite{azariadis2005poverty} reviews early applications of multiple equilibrium models in a growth context; \cite{brito2010local}\cite{antoci2011poverty}\cite{kikuchi2009endogenous} are more recent. Business cycle models of non-linear dynamics include \cite{masson1999contagion}\cite{benhabib2002avoiding}\cite{benhabib2002chaotic}\cite{kaas2005imperfectly}\cite{benhabib2005design}\cite{guegan2009chaos}\cite{brito2013non}; many admit chaotic dynamics, which mean that the economy will never reach equilibrium. Limitations of chaotic modeling are discussed later. Most of these models do not possess the equilibrium concept (ergodic distribution) that is the focus of analysis here.} \begin{definition}A structural model $\mathcal{S}$ consists of two vector-valued time series processes- the endogenous variables $\{Z_{t}\}$ and errors $\{\mathpzc{u}_{t}\}$, defined respectively on state spaces $\mathcal{Z}$ and $\mathpzc{U}$- along with a parameter vector $\gamma \in \Gamma$. The path of the dynamical system is determined by the continuous function $F: \Gamma \times \mathcal{Z} \times \mathpzc{U} \rightarrow \Gamma \times \mathcal{Z}$ such that $F=(I_{\Gamma},f)$, where $I_{\Gamma}$ is the identity map on $\Gamma$ and $E_{t}Z_{t+1}=f(\gamma, Z_{t},\mathpzc{u}_{t})$. \end{definition}
The focus on continuity is to admit the application of topological principles. Continuity on a general topological space generalizes the $\epsilon - \delta$ notion familiar from introductory real analysis courses; on metric spaces the two coincide. Recall that a \emph{continuous function} has the property that the inverse image of every open set is open.\footnote{For example the identity map is continuous because the inverse image of every open set is itself.} This is the definition of continuity in point-set topology. Readers may want to consult \cite{ok2007preliminaries} for a proof that this holds for the family of open intervals under continuous mappings between Euclidean spaces. The $\sigma$-algebra (the family of measurable sets) of a metric space can include (a countable set of) discrete mass points as well as open intervals, as with the familiar Lebesgue measure. For example, any function between discrete metric spaces is continuous, because every subset is open in the discrete topology.
Here, since the identity map is continuous and the Cartesian product of continuous functions is continuous if and only if each function is continuous, continuity of $F$ simply requires that $f$ be continuous in $\mathpzc{u}_{t}$ for all $(Z_{t},\gamma) \in \mathcal{Z} \times \Gamma$ with respect to the topology on $\mathpzc{U}$, continuous in $Z_{t}$ for all $(\gamma,\mathpzc{u}_{t}) \in \Gamma \times \mathpzc{U}$, and so on. Consult \cite{waldmann2014topology} for proofs.
In an analogous fashion I define an \emph{econometric} model.
\begin{definition}An econometric model $\mathcal{E}$ comprises two vector-valued processes- endogenous variables $\{V_{t}\}$ and a shock process $\{\mathpzc{e}_{t}\}$, with state spaces $\mathcal{V}$ and $\mathpzc{E}$ respectively- along with an identified parameter vector $\theta \in \Theta$. The path of the dynamical system is determined by the continuous function $G: \Theta \times \mathcal{V} \times \mathpzc{E} \rightarrow \Theta \times \mathcal{V}$, where $G=(I_{\Theta},g)$ and $E_{t}V_{t+1}=g(\theta, V_{t}, \mathpzc{e}_{t})$. \end{definition} The definition admits calibrated as well as estimated models. The requirement for identification (as defined formally in Section 2) corresponds to the possibility of conducting hypothesis tests about $\theta \in \Theta$ using statistics of $\{V_{t}\}$. Here $V_{t}$ are observed from aggregate data or calculated from aggregate observed variables as stipulated by the econometric model. For example, the three equation New Keynesian model of Section 2 is an econometric model in which it is calculated from the observed variables $(\pi_{t}, y_{t})$ that $\hat{\Delta}_{t}=0$ $\forall t$, by log-linearization about the non-stochastic steady state, so $V_{t}=(\pi_{t}, y_{t},\hat{\Delta}_{t})=(\pi_{t}, y_{t},0)$. \\The first two mathematical terms relate to vector spaces and the second pair come from topology. Given a vector $\textbf{x}=(\textbf{x}_{1}, \dotsc , \textbf{x}_{n})$, a subvector $\textbf{x}^{s}$ is a vector whose elements are a subset of those of $\textbf{x}$; formally $\textbf{x}^{s}=(\textbf{x}_{j_{1}}, \dotsc , \textbf{x}_{j_{k}})$ where $\{ j_{1},\dotsc , j_{k} \}\subset \{ 1,\dotsc , n \}$.\\ A \emph{homeomorphism} is a function $h: X \rightarrow Y$ between two topological spaces\footnote{Recall that a \emph{topological space} is a collection of open sets containing every countable union and finite intersection of these open sets. An open set is an abstract concept in topology, so any family of sets fulfilling these properties can be defined as open. Wherever convenient the conventional open intervals in Euclidean space will be used as open sets. This is the so-called \emph{usual topology} on $\mathbb{R}^{n}$.} $(X,T_{X})$ and $(Y,T_{Y})$ with the following properties: \begin{enumerate}[i]\item $h$ is a bijection\footnote{Recall that a bijection is a correspondence that is \emph{injective} (one-to-one), so $$\forall \,a,\, b \in X \;h(a)=h(b)\; \Rightarrow a=b $$ and \emph{surjective} (onto), $$\forall \, y \in Y \; \exists x \in X, \;h(x)=y$$ so each element of $Y$ has a corresponding element of $X$.} \item $h$ is continuous \item $h^{-1}$ is continuous \end{enumerate}Two functions $g_{1}: X \rightarrow X$ and $g_{2}: Y \rightarrow Y$ are \emph{topologically conjugate} if there exists a homeomorphism $h: X \rightarrow Y$ such that $h \circ g_{1}=g_{2}\circ h$, where $\circ$ indicates function composition, so $h\circ g_{1}(x)=h(g_{1}(x))$. \\ Topology is the branch of mathematics that deals with shape and the relations between neighborhoods of points. Homeomorphism is the equivalence relation of topology, so two spaces are \emph{topologically equivalent} if and only if a homeomorphism can be found between them. Topology generalizes Euclidean geometry: if a topological space is a geometric object, then a homeomorphism is a continuous stretching or bending of that object, with gluing and cutting forbidden.
For two topological spaces to be homeomorphic there must be a continuous map from each onto the other that never sends two points to the same image. \\ Topological conjugacy has been used extensively in the study of dynamical systems to explain why it is legitimate to approximate a non-linear system by a linear approximation local to a hyperbolic fixed point.\footnote{Recall that a hyperbolic fixed point is an equilibrium of a dynamical system with no center manifold. For a differentiable system a fixed point is hyperbolic if and only if its Jacobian (evaluated at the fixed point) has no eigenvalues with zero real part in continuous time, or no eigenvalues on the unit circle for the analogous discrete time matrix. The discrete time case will be covered in more detail later in this section. For more detail, extensions and exposition readers should consult \cite{arnold2013random}.} The seminal result, the Grobman-Hartman theorem \cite{grobman1959homeomorphism}\cite{hartman1960lemma}, and its extensions will be discussed in greater detail later in this section. My argument is that topological conjugacy can be used to specify precisely what it means for an econometric model to \emph{pass} the Lucas critique with respect to the arguments set out in \cite{lucas1976econometric}. I shall refer to this condition as the \emph{Extended Lucas critique}.\footnote{\cite{acemoglu2008introduction}, \cite{de2000mathematical}, \cite{stachurski2009economic}}
\begin{definition}An econometric model $\mathcal{E}$ governed by the function $G$ passes the \emph{Extended Lucas critique} with respect to an ambient structural model $\mathcal{S}$ governed by the function $F$, with parametrization family $\Gamma$ and shock space $\mathpzc{U}$, if it fulfills two conditions:\\
\underline{Parameter Mapping}\\
There exists a subset $\Gamma^{0} \subset \Gamma$ with a mapping $\digamma:\Gamma^{0} \rightarrow \Theta$.\\
\underline{Topological Conjugacy}\\
There exists a homeomorphism between the predicted values of the endogenous variables $Z_{t+1}$ (corresponding to $\mathpzc{u}_{t}=\bar{\mathpzc{u}}$) given their current realization $Z_{t}$ under the structural model $\mathcal{S}$, and their corresponding prediction under the econometric model $\mathcal{E}$ (corresponding to $\mathpzc{e}_{t}=\bar{\mathpzc{e}}$), for every parametrization $\gamma \in \Gamma^{0}$.\end{definition}
It follows immediately that any equation of a DSGE model passes this critique with respect to the DSGE model itself, through the identity map on $\Gamma^{0}=\Theta$.
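As a concrete illustration of the conjugacy requirement, consider the linear case. The sketch below, in which all matrices are assumed purely for illustration, checks that for linear maps a change of coordinates $h(x)=Px$ satisfies $h \circ g_{1} = g_{2} \circ h$ exactly when $B=PAP^{-1}$.
\begin{verbatim}
import numpy as np

# For linear maps g1(x) = A x and g2(y) = B y, an invertible linear map
# h(x) = P x is a (topological) conjugacy iff h(g1(x)) = g2(h(x)) for all x,
# i.e. PA = BP, i.e. B = P A P^{-1}. The matrices are illustrative assumptions.
A = np.array([[0.9, 0.2],
              [0.0, 1.1]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # the homeomorphism h(x) = P x
B = P @ A @ np.linalg.inv(P)            # the conjugate map

x = np.array([0.3, -0.5])
print(np.allclose(P @ (A @ x), B @ (P @ x)))   # True: h o g1 == g2 o h
\end{verbatim}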
\end{subsection}
\begin{subsection}{Solving for a Phillips Curve}
\end{subsection}
\Large{\textbf{Appendix}}
\appendix
\normalsize
\section{Results from Subsection 2.6}
This section presents more detail on the derivations related to the persistence problem in the benchmark New Keynesian model, alongside several extensions that indicate the robustness of the puzzle.
\subsection{Eigenvalues and Convergence}To begin with, the expressions for the eigenvalues of the matrix $A$ are as follows.
$$\lambda_{1}=\frac{\sigma^{-1}(a_{y}+\omega \beta^{-1})+1+\beta^{-1}-\sqrt{{[\sigma^{-1}(a_{y}+\omega\beta^{-1})+1+\beta^{-1}}]^{2}-4\beta^{-1}(1+\sigma^{-1}(a_{y}+\omega a_{\pi}))}}{2}$$
$$\lambda_{2}=\frac{\sigma^{-1}(a_{y}+\omega \beta^{-1})+1+\beta^{-1}+\sqrt{{[\sigma^{-1}(a_{y}+\omega\beta^{-1})+1+\beta^{-1}}]^{2}-4\beta^{-1}(1+\sigma^{-1}(a_{y}+\omega a_{\pi}))}}{2}$$
Note that both are positive and the larger eigenvalue $\lambda_{2}$ is always greater than one. We need both to lie outside the unit circle for unconditional convergence. In the case where the discriminant under the square root is negative, the solution takes the form $x_{t}=e^{-\gamma t}(A\cos(zt)+B\sin(zt))$, which converges non-monotonically when the real part of both eigenvalues lies outside the unit circle $(\gamma > 0)$; see for example \cite{simon1994mathematics} for an exposition of this case. The econometric exercise in the following subsection indicates that the conditions for determinacy and real roots are both met for the majority of calibrations and when the model is estimated on US data.
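These conditions are easy to verify numerically. The sketch below evaluates the two closed-form eigenvalues at one illustrative calibration (the parameter values are assumptions chosen for the example, not the estimates reported later) and checks both the sign of the discriminant and whether each root lies outside the unit circle.
\begin{verbatim}
import cmath

# Illustrative calibration (assumed for this check only):
beta, sigma, omega = 0.99, 1.0, 0.13  # discount factor, curvature, NKPC slope
a_pi, a_y = 1.5, 0.5                  # Taylor-rule responses

trace = (a_y + omega / beta) / sigma + 1 + 1 / beta
disc = trace**2 - (4 / beta) * (1 + (a_y + omega * a_pi) / sigma)

sq = cmath.sqrt(disc)                 # cmath handles a negative discriminant
lam1, lam2 = (trace - sq) / 2, (trace + sq) / 2

print(f"discriminant = {disc:.4f} (real roots: {disc >= 0})")
for name, lam in (("lambda_1", lam1), ("lambda_2", lam2)):
    print(f"|{name}| = {abs(lam):.4f}, outside unit circle: {abs(lam) > 1}")
\end{verbatim}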
\subsection{Full Solution for Benchmark Model}
The coefficients for inflation are $$\zeta_{\pi}^{1}=\frac{-\sigma \beta \omega^{2}(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{\pi}^{2}= \frac{\beta \omega^{2}(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)}) }{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{\pi}^{3}=\frac{\sigma\omega[(\beta \lambda_{2}-1)\lambda_{1}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{2}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
For output we have
$$\zeta_{y}^{1}= \frac{\sigma \beta \omega [(\beta \lambda_{2}-1)\lambda_{2}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{1}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{y}^{2}=\frac{- \beta \omega [(\beta \lambda_{2}-1)\lambda_{2}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{1}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{y}^{3}=\frac{\sigma(\beta \lambda_{1}-1)(\beta \lambda_{2}-1)(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})-\omega [(\beta \lambda_{2}-1)\lambda_{2}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{1}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
Finally for interest rates
$$\zeta_{i}^{1}=\frac{-a_{\pi}\sigma \beta \omega^{2}(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})+a_{y}\sigma \beta \omega[(\beta \lambda_{2}-1)\lambda_{2}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{1}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{i}^{2}=\frac{a_{\pi}\beta \omega^{2}(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})-a_{y}\beta \omega [(\beta \lambda_{2}-1)\lambda_{1}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{2}^{-(1+i)}]}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
$$\zeta_{i}^{3}=\frac{\splitfrac{a_{\pi}\beta \omega [(\beta \lambda_{2}-1)\lambda_{1}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{2}^{-(1+i)}]+a_{\pi}\omega^{2}(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})}{+a_{y}\sigma (\beta \lambda_{1}-1)(\beta \lambda_{2}-1)(\lambda_{2}^{-(1+i)}-\lambda_{1}^{-(1+i)})-a_{y}\omega [(\beta \lambda_{2}-1)\lambda_{2}^{-(1+i)}-(\beta \lambda_{1}-1)\lambda_{1}^{-(1+i)}]}}{\sigma \beta^{2}(\lambda_{2}-\lambda_{1})}$$
None of the three interest rate coefficients can be decisively signed, which explains the counter-intuitive or uncertain signs of all the other coefficients. This confirms the interpretation in the text that the error terms do not correspond with our intuition of what a shock is when policy responds to contemporaneous variables. For example, $\zeta_{\pi}^{1}<0$ implies that a preference shock which causes the household to move consumption from the next period to the present actually causes present consumption to \emph{fall}. This is a consequence of the increase in the real interest rate associated with the Taylor principle.
When there are repeated eigenvalues the form of the solution is different. The repeated eigenvalue is $\lambda=[\sigma^{-1}(a_{y}+\omega \beta^{-1})+1+\beta^{-1}]/2$. As many terms are zero in the expansion we find (see \cite{halmos1958finite}) that $$A^{-(1+i)}=-i\lambda^{-(1+i)}I+(1+i)\lambda^{-i}A^{-1}$$ The general solution equation (37) then allows us to calculate the $\zeta$ coefficients.
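The matrix identity can be verified directly; the sketch below checks it on an illustrative Jordan block with a repeated eigenvalue (the numerical values are arbitrary assumptions).
\begin{verbatim}
import numpy as np

# Check A^{-(1+i)} = -i*lam^{-(1+i)} I + (1+i)*lam^{-i} A^{-1}
# for a 2x2 matrix with repeated eigenvalue lam. Values are illustrative.
lam, i = 1.25, 3
A = np.array([[lam, 1.0],
              [0.0, lam]])             # Jordan block: repeated eigenvalue lam

lhs = np.linalg.matrix_power(np.linalg.inv(A), 1 + i)
rhs = -i * lam**(-(1 + i)) * np.eye(2) + (1 + i) * lam**(-i) * np.linalg.inv(A)
print(np.allclose(lhs, rhs))           # True
\end{verbatim}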
\subsection{Persistence with Forward-Looking Policy}The lack-of-persistence finding generalizes naturally to a class of forward-looking policy stances, which includes the infinite horizon loss functions studied in the optimal policy Sections 5 and 6 but also allows for differing weights on output and inflation objectives, policy horizons and discounting behavior. I require the following restrictions on every admissible loss function $L \in \mathcal{L}$, the family of admissible loss functions.
\begin{assumption} ??? \end{assumption}
\subsection{Persistence in Generalized Taylor}
\subsection{Persistence with Capital Goods}
\section{Proofs from Section 3}
This section proves the three propositions about price dispersion in the stochastic New Keynesian model mentioned in Sections 2 and 3.
\subsection{Proof of Lemma 2}
\begin{proof}The proof that $\Delta \geq 1$ is an application of Jensen's inequality. First define two functions: $$g(p_{i})=p_{i}\Big(\frac{p_{i}}{P}\Big)^{-\theta}$$ $$\phi(p_{i})=\Big(\frac{p_{i}}{P}\Big)^{\frac{\theta}{\theta-1}}$$ We need to assign a probability measure to the prices. Any non-singular measure assigning zero probability to $\{p_{i}=0\}$ at every history will suffice. Note that the construction of the price level then ensures that $P>0$ with probability one. Therefore we know that $\phi$ is strictly convex on every measurable set, since $\frac{d^{2}\phi}{d{p_{i}}^{2}}=\frac{\theta}{(\theta-1)^{2}}\frac{\phi(p_{i})}{p_{i}^2}>0 \; \forall p_{i} > 0$, although in the first part of the proof I will only use the weak convexity property.\\Note that $P=\int_{\Omega} \; g \; \mathrm{d}\mu$. Now since $\phi$ is a convex function defined on a metric space, it follows from Theorem 7.12 (p. 265) in \cite{aliprantisborder} that it has a sub-derivative at every point. Hence there exist $a$ and $b$ such that:
$$ap^{*}+b \leq \phi(p^{*})$$
for all possible reset prices $p^{*}$, with equality at the particular value $p^{*}=P$: $$aP+b=\phi(P)$$
It follows that: $$\phi \circ g(p^{*}) \geq ag(p^{*})+b$$ for all $p^{*}$. Since we have a probability measure, the integral is monotone with $\mu(\Omega)=1$. Note that: $$\Delta= \int_{\Omega}\phi \circ g \; \mathrm{d}\mu$$ $$\geq \int_{\Omega} (ag+b) \; \mathrm{d}\mu$$
$$=a\int_{\Omega} g \; \mathrm{d}\mu + \int_{\Omega} b \; \mathrm{d}\mu$$ $$=aP+b$$ $$=\phi(P)$$ $$=1$$\end{proof}
where I have used, respectively, the monotonicity of the Lebesgue integral, its linearity and the definitions of the functions; see \cite{roydenreal} (pp. 80-82).
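The first part of the lemma is easy to illustrate numerically. The sketch below evaluates the CES price level and the dispersion integral for an assumed discrete measure over three prices; the elasticity and the shares are purely illustrative.
\begin{verbatim}
import numpy as np

# Illustration of Lemma 2, part one: Delta >= 1, with equality only when
# all prices coincide. theta, shares and prices are assumed values.
theta = 6.0                              # elasticity of substitution
shares = np.array([0.5, 0.3, 0.2])       # the measure mu of each price
prices = np.array([1.0, 1.1, 0.9])

P = (shares @ prices**(1 - theta))**(1 / (1 - theta))   # CES price level
Delta = shares @ (prices / P)**(-theta)                 # price dispersion
print(P, Delta)                                         # Delta > 1 here
print(shares @ (np.full(3, P) / P)**(-theta))           # equal prices: Delta == 1
\end{verbatim}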
\\ For the second part we need to be clear about the nature of the measure used. It is a discrete measure corresponding to the price dispersion statistics defined at each time $t$. As a probability measure it is defined by the triplet $(\Omega, \Sigma, \mu)$, where $\Omega$ is the sample space, $\Sigma$ is a $\sigma$-algebra of its subsets and $\mu$ is a probability measure defined for every set in $\Sigma$. $\Omega_{t}$ is the set of all prices in the economy at time $t$, $\Sigma_{t}$ is the set of all subsets (the power set) of $\Omega_{t}$, denoted $\Sigma_{t}= \mathcal{P}(\Omega_{t})$, and $\mu_{t}$ is the share of particular prices in the economy at time $t$. To generalize the result across models simply modify the probability measure: the definition of $\Sigma_{t}$ will be the same for all discrete time models but $\Omega_{t}$ and $\mu_{t}$ will change. For the Calvo model without indexation used here they are as follows:
$$\Omega_{t}= \{\cdots,p_{-1}^{*},p_{0}^{*}, p_{1}^{*}, \cdots , p_{t}^{*} \}$$
$$\mu_{t}(p)=\sum_{T=-\infty}^{t}\delta_{T}(p) \alpha ^{t-T}(1-\alpha)$$
where $p_{T}^{*}$ is the reset price set at time $T$ and $\delta_{T}$ is the corresponding indicator function, defined as $\delta_{T}(p)=\begin{cases} 1 & \mbox{if} \: p=p_{T}^{*} \\ 0 & \mbox{otherwise,}\end{cases}$ for all $T \leq t$.
Note that the incongruous feature of having an infinite history of reset prices is not necessary to prove Lemma 1 for the Calvo model- the result would hold with a finite history of reset prices starting at $p_{0}^{*}$, with the measure $\mu_{t}(p)=\sum_{T=1}^{t}\delta_{T}(p) \alpha ^{t-T}(1-\alpha)+\delta_{0}(p)\alpha^{t}$. However, the proof of the existence of the stochastic steady state in Theorem ??? does rely on an infinite history, i.e. allowing the limit $t-T \rightarrow \infty$, as do subsequent generalizations.
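A quick check that the finite-history measure above is a genuine probability measure, i.e. that the cohort weights sum to one (the stickiness parameter and horizon below are assumed):
\begin{verbatim}
# The finite-history Calvo weights sum to one:
# sum_{T=1}^{t} (1 - alpha) * alpha^{t-T} + alpha^{t} = 1.
alpha, t = 0.75, 20                      # assumed stickiness and horizon
total = sum((1 - alpha) * alpha**(t - T) for T in range(1, t + 1)) + alpha**t
print(total)                             # 1.0 up to floating point error
\end{verbatim}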
\\For the \cite{yun1996nominal} model, where the Calvo firms not allowed to re-optimize index to trend inflation $\bar{\pi}$, they are:
$$\Omega_{t}=\{\cdots, (1+\bar{\pi})^{t+1}p_{-1}^{*},(1+\bar{\pi})^{t}p_{0}^{*},(1+\bar{\pi})^{t-1}p_{1}^{*}, \cdots , p_{t}^{*} \}$$
$$\mu_{t}(p)=\sum_{T=-\infty}^{t}\hat{\delta}_{t,T}(p) \alpha ^{t-T}(1-\alpha) $$
where $\hat{\delta}_{t,T}(p)=\begin{cases} 1 & \mbox{if} \: p=(1+\bar{\pi})^{t-T}p_{T}^{*} \\ 0 & \mbox{otherwise,}\end{cases}$ for all $T \leq t$.
\\For the Generalized Taylor Economy:
$$\Omega_{t}= \{\cdots, p_{t-(J-1),J}^{*}, p_{t-(J-2),J}^{*}, p_{t-(J-2),J-1}^{*}, \cdots, p_{t-1,2}^{*}, p_{t-1,3}^{*}, \cdots, p_{t-1,J}^{*}, p_{t,1}^{*}, p_{t,2}^{*}, \cdots , p_{t,J}^{*}\}$$
$$\mu_{t}(p)=\sum_{j=1}^{J}\sum_{k=0}^{j-1}\delta_{t-k,j}(p)\,\gamma_{j}/j$$
Note that the second subscript now indicates contract length and $J$ is the maximum contract length. $\gamma_{j}$ is the share of firms with contract length $j$. There is staggered pricing for each contract length, so $1/j$ is the fraction of firms with contract length $j$ resetting at a given time. The need to generate staggered nominal adjustment within each sector is the reason why we cannot begin the contract history at the date $t=0$. Note that, unlike with Calvo, there is not a single reset price in each period but one for each contract length $1$ through $J$.
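As with the Calvo case, one can confirm that the Generalized Taylor measure integrates to one, since each contract length $j$ contributes $j$ cohorts of weight $\gamma_{j}/j$ (the shares below are assumed):
\begin{verbatim}
# The GTE weights sum to one:
# sum_j sum_{k=0}^{j-1} gamma_j / j = sum_j gamma_j = 1.
gammas = [0.4, 0.35, 0.25]               # assumed contract-length shares, J = 3
total = sum(g / j for j, g in enumerate(gammas, start=1) for _ in range(j))
print(total)                             # 1.0
\end{verbatim}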
\\Finally, other models of state dependent pricing can be admitted trivially by making the parameters of the measure dependent on the model's parameters and the history of shocks.
\\To derive the conditions for $\Delta > 1$ we need to employ the strict convexity property of $\phi$, which is as follows:
$$\phi(sp+ (1-s)p') < s\phi(p)+(1-s)\phi(p')$$ for any $s$ in (0,1) and $p \neq p'$.
Select any $p'$. We know from the weak convexity property that $\phi$ has a sub-derivative at $p'$, so:
$$ap+b\leq \phi(p)$$
$$ap'+b=\phi(p')$$
Now begins a contradiction argument. Suppose another point $p''$ had the same sub-derivative; then:
$$ap''+b=\phi(p'')$$
Now consider a convex combination $p'''=sp'+(1-s)p''$. As $\phi$ is strictly convex we know that:
$$s\phi(p') + (1-s)\phi(p'') > \phi(p''')$$ However, as we have assumed the two points share the same sub-derivative, we obtain
$$s(ap'+b)+ (1-s)(ap''+b)=ap'''+b > \phi(p''')$$ which contradicts the sub-derivative inequality $ap'''+b \leq \phi(p''')$.
Therefore for all $p^{*}\neq P$:
$$\phi \circ g(p^{*}) > ag(p^{*})+b$$ Now partition the sample space as follows: $\Omega_{t}^{1}=\Omega_{t} \backslash \{p_{T}^{*}: p_{T}^{*}=P_{t}\}$ and $\Omega_{t}^{2}=\Omega_{t}\backslash \Omega_{t}^{1}$
This allows me to decompose the condition for $\Delta =1$ as follows:
$$\int_{\Omega_{t}^{1}} \phi \circ g \; \mathrm{d}\mu + \int_{\Omega_{t}^{2}} \phi \circ g \; \mathrm{d}\mu = \int_{\Omega_{t}^{1}}(ag +b) \; \mathrm{d} \mu + \int_{\Omega_{t}^{2}} \; (ag+b) \;\mathrm{d}\mu$$
Now we can cancel the second expression on each side, because the integrands are equal pointwise on every subset of $\Omega_{t}^{2}$. Rearranging and defining the function $h=\phi \circ g - (ag+b)$ yields the condition:
$$\int_{\Omega_{t}^{1}}h \; \mathrm{d}\mu=0$$
Now the next step is to prove that the set $\Omega_{t}^{1}$ has measure zero. Since $h \geq 0$, I can apply Chebyshev's inequality, which states that for any $\epsilon > 0$:
$$\mu(\{h \geq \epsilon \})\leq \frac{1}{\epsilon}\int_{\Omega_{t}^{1}} h \; \mathrm{d}\mu = 0$$ Taking the union over a sequence $\epsilon_{k}\searrow 0$ yields $\mu(\{h > 0 \})=0$. Now note that:
$$p \in \Omega_{t}^{1} \Rightarrow h(p) > 0 \Rightarrow p \in \{h > 0 \}$$
So $\Omega_{t}^{1}\subseteq \{h > 0 \}$, thus $\mu (\Omega_{t}^{1})=0$. This means every subset of $\Omega_{t}^{1}$ must have zero probability, including every individual reset price $p$; however this contradicts the definition of $\Omega_{t}$, under which a positive fraction of firms are selling at each price it contains. Therefore $\Omega_{t}^{1}$ is the empty set and $\Omega_{t}^{2}=\Omega_{t}$, so $\Delta=1$ if and only if every reset price satisfies $p^{*}=P$. To prove $p_{i}=P$, write the price level as $P=\int_{i}p_{i}(\frac{p_{i}}{P})^{-\theta}\mathrm{d}{\mu}$. Note that with all firms setting the same price, $p_{i}$ is independent of $\mu$, which with a dispersed price level would reflect the share of firms resetting prices at each date. We can therefore factor $p_{i}$ out of the integral to leave $P=p_{i}\Delta$; since I have already shown that $\Delta =1$ when all $p_{i}=P$, the proof is complete.
\subsection{Definition 1 and Lemma 3 Material}This subsection contains extensions of Lemma 3 to cover various nominal indexation schemes proposed in the literature and the example cited in the text.
\subsubsection{Taylor Pricing Example} Here is an example where, under Taylor contracts, non-zero inflation eliminates price dispersion. All contracts last two periods, so the price level and dispersion are given respectively by $$P_{t}^{1-\theta}=\frac{1}{2}(p_{t}^{*})^{1-\theta}+\frac{1}{2}(p_{t-1}^{*})^{1-\theta}$$ $$\Delta_{t}=\frac{1}{2}\Big(\frac{p_{t}^{*}}{P_{t}}\Big)^{-\theta}+\frac{1}{2}\Big(\frac{p_{t-1}^{*}}{P_{t}}\Big)^{-\theta}$$ Suppose $\theta=2$, $p_{0}^{*}=1$, $p_{1}^{*}=2$, which solves to give price level $P_{1}=\frac{4}{3}$ and price dispersion $\Delta_{1}=\frac{10}{9}$. Now consider time $t=2$: the firms that set their price in period $0$ now get to reset it. Therefore the reset price $p_{0}^{*}=1$ is replaced by $p_{2}^{*}$, with $p_{1}^{*}=2$ the other price in the economy. Now applying Lemma 2, $\Delta_{2}=1$ if and only if $p_{2}^{*}=2$. This implies $P_{2}=2$, so inflation is non-zero, in fact $\pi_{2}=50\%$.
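The arithmetic in this example can be reproduced exactly with rational numbers; the sketch below simply re-computes the quantities stated above.
\begin{verbatim}
from fractions import Fraction as F

# Re-computation of the two-period Taylor example with theta = 2
# (so P^{1-theta} = P^{-1} and all arithmetic stays rational).
theta = 2
p0, p1 = F(1), F(2)

P1 = 1 / (F(1, 2) * p0**(1 - theta) + F(1, 2) * p1**(1 - theta))        # = 4/3
Delta1 = F(1, 2) * (p0 / P1)**(-theta) + F(1, 2) * (p1 / P1)**(-theta)  # = 10/9

p2 = p1                    # Lemma 2: dispersion vanishes iff both prices coincide
P2 = 1 / (F(1, 2) * p1**(1 - theta) + F(1, 2) * p2**(1 - theta))        # = 2
pi2 = P2 / P1 - 1          # = 1/2, i.e. 50% inflation
print(P1, Delta1, P2, pi2)
\end{verbatim}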
\bibliographystyle{plainnat} % or try abbrvnat or unsrtnat
\bibliography{jmp.bib} % refers to example.bib
% \printbibliography
\end{document}
\section{Panel Econometrics}
However, the notion that price dispersion is permanent even if the shocks driving it are not can be retained throughout the class of New Keynesian models if attention is restricted to the predominant solution concept in this literature, the class of non-stochastic equilibria.
The problem is that real marginal costs are hard to observe, and empirical findings tended to be sensitive to which proxy was used. For this reason, and for the sake of parsimony, it is common to eliminate marginal costs and replace them with an expression involving the inflation gap- which coincides with $\pi_{t}$ as there is no trend inflation- and a measure of the output gap $\hat{y}_{t}$. This is where the error comes in. \subsection{Phillips Curves with Price Dispersion}
New Keynesian economists teach their students that price dispersion is second order, i.e. $\hat{\Delta}=0$ whilst $\hat{\Delta}^{2}\neq 0$ \cite{gali2009monetary}\cite{walsh2010monetary}\cite{woodford2011interest}. However, as Lemma 1 clarified, this property is unique to the non-stochastic steady state; otherwise price dispersion is strictly positive, $\hat{\Delta}> 0$. To incorporate this insight into business cycle modeling I simply have to change the point about which I carry out linearization to one where $\Delta > 1$. More specifically, I linearize about the point $(\pi,y^{e},\Delta)=(0,0,\bar{\Delta}_{t})$. Note that linearizing around the flexible price equilibrium, where the efficient output gap is zero, is for maximum comparability with Section 2.6 and is arbitrary, unlike the choice of $\pi=0$.\footnote{The focus in this paper is on the effect of stochasticity on $\Delta$ and its implication for the New Keynesian Phillips curve. It follows immediately from Lemma 1, and is well-known already in the literature, that a non-stochastic steady state with positive inflation yields positive price dispersion, i.e. $\hat{\Delta} > 0$; see \cite{ascari2014macroeconomics} for an exposition of these so-called trend inflation models.}
In principle duplicate elements could be removed from the sets above, for the Calvo, Yun and Generalized Taylor models respectively via
$$\Omega_{t}\backslash \{p_{\tau'}^{*}: p_{\tau}^{*}=p_{\tau'}^{*}, \; \mbox{for some} \; \tau < \tau' \}$$
$$\Omega_{t}\backslash \{p_{\tau'}^{*}: (1+\bar{\pi})^{t-\tau}p_{\tau}^{*}=(1+\bar{\pi})^{t-\tau'}p_{\tau'}^{*}, \; \mbox{for some} \; \tau < \tau' \}$$
$$\Omega_{t}\backslash \{p_{\tau',j'}^{*}: p_{\tau,j}^{*}=p_{\tau',j'}^{*}, \; \mbox{for some} \; \tau < \tau' \; \mbox{or} \; \tau = \tau', \; j< j' \}$$
but this is unnecessary, because sets are unordered and identical elements coincide. For all of these models the "tie-break" rule used to remove identical reset prices set at different times or for different contract lengths is irrelevant; such a rule is only required to define the measure of prices taking a particular value $p$.
For finite histories beginning at $t=0$ the analogous objects are as follows. For the Calvo model the measure simply gains the term $\delta_{0}(p)\alpha^{t}$, as noted above. For the Yun model, where $p_{0}^{*}$ indexes the first observed reset price,
$$\mu_{t}(p)=\sum_{T=1}^{t}\hat{\delta}_{t,T}(p) \alpha ^{t-T}(1-\alpha)+\hat{\delta}_{t,0}(p)\alpha^{t} $$
For the Generalized Taylor economy
$$\Omega_{t}= \{p_{0,1}^{*}, p_{0,2}^{*}, \cdots , p_{0,J}^{*}, p_{1,1}^{*}, p_{2,1}^{*}, p_{2,2}^{*}, \cdots , p_{\tau,1}^{*}, p_{\tau,2}^{*} \cdots p_{\tau,\tau}^{*} , \cdots p_{t,1}^{*}, p_{t,2}^{*}, \cdots , p_{t,\min(J,t)}^{*}\}$$
where $\tau \leq J$, $\tau < t$. The corresponding measure is:
$$\mu_{t}(p)=\sum_{j=1}^{\min(J,t)}\delta_{t-(j-1),j}(p)\,\gamma_{j}/\min(j,t+1)$$
As before, the second subscript indicates contract length, $J$ is the maximum contract length and $\gamma_{j}$ is the share of firms with contract length $j$, so $\sum_{j=1}^{J}\gamma_{j}=1$. $1/\min(j,t+1)$ gives the share of firms with contract length $j$ resetting prices at time $t-(j-1)$, and there is an appropriate indicator function $\delta_{T,j}$ for each price in $\Omega_{t}$. The $\min(J,t)$ term is an artifact of assuming the economy begins at time $t=0$: for times $t< J$ there are contract lengths that have not yet come up for renewal. This restriction is not important- one could allow the economy an arbitrarily long history without affecting any results, adapting the $\sigma$-algebra and probability measure accordingly. Note that by approximating the economy around its stochastic steady state, which is an ergodic distribution, this 'burn-in' period for the model is dispensed with.
\footnote{I assume here that there is a finite set of prices in the economy at any time $t$. The result would extend to any countably infinite set of prices. All we need is a positive measure of individual prices $p$ away from the aggregate price level $P$.} \begin{proposition}Within the class of New Keynesian models, a non-stochastic equilibrium where $\pi_{T}=0 \; \forall T>t_{0}$ implies $\Delta_{T}=\bar{\Delta}= \Delta_{t_0} \; \forall T>t_{0}$. \end{proposition}
The linearization theorem allows me to map one-to-one (biject) between the linear and the non-linear system, subject to determinacy in the linear system. This allows me to establish local uniqueness properties of the non-linear system from the linear system. Log-linearizing about the point $(\Delta_{t_{0}},0)$, interpreting it as an equilibrium so that reset prices must be equal at this point, $p_{t}^{*}=p_{t-M}^{*}$, yields the intermediate result that $\pi_{t}=\frac{1}{M}(\hat{p}_{t}^{*}-\hat{p}_{t-M}^{*})$, which means that
\begin{equation} \hat{\Delta_{t}}=(1+\frac{\theta}{\hat{\Delta}_{t_{0}}})\pi_{t} + \hat{\Delta_{t-1}} \end{equation}
However, we know that $\pi_{t}=0$, hence $\hat{\Delta}_{t}=\hat{\Delta}_{t-1}=\hat{\Delta}_{t_{0}}$. Next use the expression for $P_{t-1}$ to generate the expression for inflation in terms of the new reset price $p_{t}^{*}$ and the old reset price $p_{t-M}^{*}$ which it has replaced.
However, by assumption $\Delta_{t_{0}}>1$. By the price dispersion evolution equation (21) we know that $\Delta_{t}=1-\alpha +\alpha \Delta_{t-1}$, which solves to give $\Delta_{t}=1+\alpha^{t-t_{0}}(\Delta_{t_{0}}-1)$.
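As a check on this closed form, iterating the recursion numerically reproduces it (the parameter values below are illustrative):
\begin{verbatim}
# Iterating Delta_t = 1 - alpha + alpha * Delta_{t-1} reproduces the
# closed form Delta_t = 1 + alpha^{t-t0} * (Delta_{t0} - 1). Assumed values.
alpha, Delta0, horizon = 0.75, 1.2, 12
Delta = Delta0
for s in range(horizon):
    Delta = 1 - alpha + alpha * Delta
print(Delta, 1 + alpha**horizon * (Delta0 - 1))   # the two agree
\end{verbatim}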
Hence $\Delta_{t}>1$ at every finite $t$, although an equilibrium exists for the sub-system $(\pi_{t},y^{e}_{t})$, i.e. $\pi_{t}=0$ and $y_{t}=\bar{y}$. \begin{proposition}If at a particular time $t_{0}$ price dispersion is strictly positive, i.e. $\Delta_{t_0}> 1$ ($\hat{\Delta}_{t_0}> 0$), there cannot exist a non-stochastic equilibrium with equilibrium price dispersion $\hat{\bar{\Delta}} = 0$. \end{proposition}
These problems do not extend to the basic Taylor model because, as I show in Proposition 4, a non-stochastic equilibrium exists. However, they return when the model is generalized to allow for a distribution of contracts or when trend inflation is included. Crucially, price dispersion will continue to exist even when the shock process causing it is turned off. This invalidates the concept of a non-stochastic zero-price-dispersion equilibrium. In other words, any non-stochastic zero-inflation equilibrium preserves price dispersion resulting from previous inflationary or deflationary spells. As established in the previous remark, this result holds trivially for the Calvo model (where it also applies to non-stochastic equilibrium paths). Therefore all that is required is a generalization to the other models in the New Keynesian literature, those with Taylor contracting. The generalization to the Taylor model is simple.
Therefore, as stated, there is non-zero price dispersion in the non-stochastic steady state equilibrium; in fact existing price dispersion is not eliminated at all.
\\ The extension to Generalized Taylor is straightforward. In a Generalized Taylor economy there is a spectrum of contract lengths with staggered price resetting. $\gamma_{j}$ is the share of contracts with length $j$, so in each period $\gamma_{j}/j$ is the fraction of contracts of length $j$ that are reset. $J$ is the maximum length. Therefore the price level takes the form \begin{equation}P_{t}^{1-\theta}= \gamma_{1}{p_{1,t}^{*}}^{1-\theta} + \frac{\gamma_{2}}{2}{p_{2,t}^{*}}^{1-\theta} + \cdots + \frac{\gamma_{J}}{J}{p_{J,t}^{*}}^{1-\theta} +\frac{\gamma_{2}}{2}{p_{2,t-1}^{*}}^{1-\theta} +\frac{\gamma_{3}}{3}{p_{3,t-1}^{*}}^{1-\theta}+\cdots + \frac{\gamma_{J}}{J}{p_{J,t-1}^{*}}^{1-\theta}+ \cdots +\frac{\gamma_{J}}{J}{p_{J,t-(J-1)}^{*}}^{1-\theta}\end{equation}
The expression for price dispersion is
\begin{equation}\Delta_{t}= \gamma_{1}\Big(\frac{p_{1,t}^{*}}{P_{t}}\Big)^{-\theta}+ \frac{\gamma_{2}}{2}\Big(\frac{p_{2,t}^{*}}{P_{t}}\Big)^{-\theta} + \cdots + \frac{\gamma_{J}}{J}\Big(\frac{p_{J,t}^{*}}{P_{t}}\Big)^{-\theta} + (1+\pi_{t})^{\theta}\bigg[\Delta_{t-1}- \gamma_{1}\Big(\frac{p_{1,t-1}^{*}}{P_{t-1}}\Big)^{-\theta} - \frac{\gamma_{2}}{2}\Big(\frac{p_{2,t-2}^{*}}{P_{t-1}}\Big)^{-\theta}- \cdots - \frac{\gamma_{J}}{J}\Big(\frac{p_{J,t-J}^{*}}{P_{t-1}}\Big)^{-\theta}\bigg]\end{equation}
That is, dispersion today equals the dispersion contributed by the new reset prices, less that of the disappeared prices, plus yesterday's dispersion scaled up by inflation. Substituting in $\pi=0$ and $p_{t}^{*}= p_{t-M}^{*}$ implies $\Delta_{t}=\Delta_{t-1}=\Delta_{t_{0}}$. Hence $\pi=0 \iff \Delta_{t}=\Delta_{t-1}=\Delta_{t_{0}}$, noting that $\pi=0 \iff p_{t}^{*}= p_{t-M}^{*}$. Second, the result extends to the policy rules associated with minimizing arbitrary forward-looking output-inflation loss functions, associated with the so-called inflation forecast targeting strategy advocated by \cite{svensson2010inflation}\cite{svensson2012evaluating}\cite{svensson2014inflation}.
\begin{lemma}$\hat{\Delta_{t}} > 0$ whenever $\pi_{t} \neq 0$.\end{lemma} This follows swiftly from Lemma 2 and uses the fact that, with non-trivial price rigidity, last period's reset price $p^{*}_{t-1}$ carries a strictly positive weight $\mu$ in the current period's price level $P_{t}$. Applying Lemma 1 and the definition of inflation gives $\Delta_{t}>1 \Leftrightarrow p_{t}^{*} \neq p_{t-1}^{*} \Leftrightarrow \pi_{t} \neq 0$; taking the log-deviation completes the proof. This result contradicts Lemma 1 (???) and proves that log-linearizing around the non-stochastic steady state does not characterize the local dynamics of the non-linear system.
Proposition 2 (???) onward helps to characterize the behavior of $\Delta$ at a particular point in time without reference to its origin. As it does not use the firm's optimization problem, the result is completely general across all models admitting a non-trivial degree of price rigidity. Moreover, as I only use the strict convexity of the demand curve, which follows if we assume the local uniqueness of the optimal reset price, the result can be generalized to allow for arbitrary heterogeneity between firms or a different demand system.
\subsection{Econometric Exercise}The price level has the property that if \emph{almost every} firm sets the same price $p_{i,t}=p$ then this will be the price level, so $P_{t}=p$. The second assumption exists to ensure 'rigid prices' stay in the price level: it implies $\bar{\pi}_{t}$ does not change during the contract, because there are otherwise no restrictions on firms changing price.
This restriction rules out stylized models based upon the price adjustment mechanisms in \cite{barro1972theory}\cite{sheshinski1977inflation}\cite{rotemberg1982monopolistic}\cite{mankiw1985small}, where there are only physical costs to price changing that do not differ among firms; as argued earlier, such models are often unable to generate sufficient nominal rigidity. In any case physical costs of price changing vary substantially across firms and products, see \cite{levy1997magnitude}, consistent with the observed heterogeneity in the frequency of price adjustment found in empirical studies such as \cite{dhyne2006price}\cite{dickens2007wages}\cite{dixon2012generalised}. An econometric exercise in Appendix A suggests these conditions are likely to be met whether the parameters are calibrated or estimated. \cite{andrews1988laws} actually proves a novel strong convergence result too, but I have no use for it here. I found the exposition of \cite{hamilton1994time} very helpful.
For concreteness consider the popular Generalized Method of Moments estimator\footnote{The approach was developed by \cite{hansen1982large}, applied to rational expectations modeling by \cite{hansen1982generalized}, and based upon the method of moments estimation procedure first employed by Karl Pearson.} for the reduced-form parameter matrix $R_{3 \times 3}$ of the relationship between $Q_{t}$ and $E_{t}Q_{t+1}$. It is necessary to have an $m \times 1$ ($m \geq 3$) vector of available instruments $Z_{t} \in \mathcal{I}_{t}$ consistent with the orthogonality condition and associated estimator:
$$E_{t}[Z'_{t}(Q_{t}-Q_{t+1}R)]=0$$
To allow for the case of over-identification, where the number of potential instruments in $Z_{t}$ exceeds the number of parameters ($m > 3$ in the basic model), I minimize the quadratic form of the orthogonality conditions $H_{T}(R)=[T^{-1}(Z'_{t}(Q_{t}-Q_{t+1}R))W_{T}(Z'_{t}(Q_{t}-Q_{t+1}R))']$. Here $W_{T}$ is a weighting matrix that will turn out to be inversely proportional to the variance-covariance matrix of the orthogonality conditions; as previously, $T$ is the number of time observations. Optimization, with continuous differentiability in $R$, yields the GMM estimator\footnote{Consult \cite{hansen1982generalized} or a textbook such as \cite{hamilton1994time} for a full exposition.}
$$\hat{R}=(Q'_{t+1}Z_{t}W_{T}Z'_{t}Q_{t+1})^{-1}Q'_{t+1}Z_{t}W_{T}Z'_{t}Q_{t}$$
with $\hat{R}=(Z'_{t}Q_{t+1})^{-1}Z'_{t}Q_{t}$
corresponding to the just-identified case where the orthogonality conditions are solved exactly.\\However, applying Proposition 2 again shows that this estimator is not defined, because $Z'_{t}Q_{t+1}=0$, meaning that the matrix to be inverted is non-invertible. This follows because $Q_{t+1}$ is comprised entirely of expectation errors, which must be uncorrelated with any variable belonging to the information set at time $t$, $\mathcal{I}_{t}$, to which $Z_{t}$ belongs. Hence there are no valid instruments for the expectations of future macroeconomic variables in this system.
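The mechanism is easy to see in a simulation. In the sketch below the data generating process is an illustrative assumption: $Q_{t+1}$ is drawn as a pure expectation error, independent of the instrument, so the cross-moment that the just-identified estimator must invert degenerates.
\begin{verbatim}
import numpy as np

# If Q_{t+1} is pure expectation error (unpredictable at time t), the sample
# cross-moment Z'Q_{t+1}/T tends to zero, so (Z'Q_{t+1})^{-1} Z'Q_t blows up.
rng = np.random.default_rng(1)
T = 50_000
Z = rng.normal(size=(T, 1))              # instrument in the time-t information set
Q_next = rng.normal(size=(T, 1))         # expectation error, independent of Z
Q_now = 0.8 * Z + rng.normal(size=(T, 1))

cross = (Z.T @ Q_next) / T
print(cross)                             # approximately 0: no valid instrument
print(np.linalg.solve(Z.T @ Q_next, Z.T @ Q_now))  # huge, unstable "estimate"
\end{verbatim}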
This implies the structural parameters $\theta$ are not identified. For proof, suppose the converse, that $\theta$ were identified; then Assumption 2 is sufficient to invoke \cite{hansen1982generalized} to prove weak convergence of the estimator $\hat{\theta}_{T}\rightarrow \theta$. Now consider the solution for the reduced form parameters in terms of their structural counterparts.
$$R_{11}=\frac{1}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$-\infty < R_{12}=\frac{\gamma(a_{\pi}\beta-1)}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
$$R_{13}=0$$
$$R_{21}=\frac{a_{y}+a_{\pi}\omega}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$R_{22}=0$$
$$-\infty< R_{23}=\frac{a_{y}\gamma(a_{\pi}\beta-1)+a_{\pi}[\omega \gamma(a_{\pi}\beta - 1)+\beta(1+\gamma(a_{y}+a_{\pi}\omega))]}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
$$R_{31}=\frac{\omega}{1+\gamma(a_{y}+a_{\pi})}>0$$
$$R_{32}=0$$
$$-\infty< R_{33}=\frac{\omega \gamma(a_{\pi}\beta - 1)+\beta[1+\gamma(a_{y}+a_{\pi}\omega)]}{1+\gamma(a_{y}+a_{\pi})}< \infty$$
Note that each reduced form parameter $R_{ij}$ is a composite of continuous functions and is therefore a continuous function of the structural parameters $\theta$; see \cite{aliprantisborder}. Denote this function by $\digamma_{ij}$, so $\digamma_{ij}(\theta)=R_{ij}$. Therefore by the continuous mapping theorem of \cite{mann1943stochastic}, $\lim_{T \rightarrow \infty}\hat{\digamma}_{ij}=R_{ij}$ in probability. Therefore there would be a one-to-one mapping between limiting probability distributions over the reduced-form parameters (a unit mass at the true value) and probability distributions over the observables $Q_{t}$. Therefore the reduced form parameters would be identified- a contradiction. Therefore the structural parameters must be unidentified. For generality, whether a model is identified is allowed to vary with time $t$, with the limit ${T \rightarrow \infty}$ corresponding to the asymptotic distribution.\footnote{Proving that this solution is unique is more complicated, since the dimensionality of the system is increased when we allow future endogenous variables in the policy rule. Example B in \cite{blanchard1980solution} illustrates how stability and uniqueness would be analyzed in this case. It is also necessary to impose a no-explosion condition to ensure existence. In particular this rules out interest rates set as absolutely convergent sums of $\pi$ and $y$ terms. Fortunately, this problem does not arise with optimal policy setting in this three equation set-up, as shown in Section 4. It also does not arise in my own characterization of optimal policy given in Sections 5 and 6.}
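The mapping $\digamma$ is straightforward to evaluate; the sketch below computes the non-trivial entries of $R$ from the closed forms above at one assumed parametrization (the values are illustrative, not calibrated):
\begin{verbatim}
# Evaluating the reduced-form mapping digamma(theta) = R entry by entry,
# using the closed forms above. Structural values are illustrative assumptions.
beta, gamma_, omega = 0.99, 0.15, 0.13
a_pi, a_y = 1.5, 0.5

D = 1 + gamma_ * (a_y + a_pi)            # the common denominator
R11 = 1 / D
R12 = gamma_ * (a_pi * beta - 1) / D
R21 = (a_y + a_pi * omega) / D
R23 = (a_y * gamma_ * (a_pi * beta - 1)
       + a_pi * (omega * gamma_ * (a_pi * beta - 1)
                 + beta * (1 + gamma_ * (a_y + a_pi * omega)))) / D
R31 = omega / D
R33 = (omega * gamma_ * (a_pi * beta - 1)
       + beta * (1 + gamma_ * (a_y + a_pi * omega))) / D
print(R11, R12, R21, R23, R31, R33)      # each entry moves continuously with theta
\end{verbatim}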
Equivalently, an econometric model $\mathcal{E}$ can be viewed as a possibly random continuous\footnote{Here continuity is defined with respect to the usual topology on $\mathbb{R}^{n}$ for every possible parameter realization.} mapping $m: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ from an $n\times 1$ vector of endogenous variables $Z_{t}^{\mathcal{E}}$ to their next period realization $Z_{t+1}^{\mathcal{E}}$, with shocks $v_{t}^{\mathcal{E}}$, $$Z_{t+1}^{\mathcal{E}}=m(Z_{t}^{\mathcal{E}},v_{t}^{\mathcal{E}})$$ and parameters $M$, where $M_{0}$ denotes a particular parametrization and $\mathcal{B}_{M}$ is the Borel space\footnote{A Borel space is the family of all sets that can be formed by the countable union, finite intersection and complementation of open sets. It is a topological space on which a probability measure (a prerequisite for estimation) can be defined.} containing all possible $M_{0}$. To avoid the complications from randomness, for now consider the expected value $\hat{Z}^{\mathcal{E}}_{t}$ occurring when the shocks are set to their expected value $v^{\mathcal{E}}_{t}=\bar{v}^{\mathcal{E}}$, so $\hat{Z}^{\mathcal{E}}_{t}=m(Z_{t}^{\mathcal{E}},\bar{v}^{\mathcal{E}})$.
Here I use the term \emph{econometric} for congruity with Lucas' seminal work. It differs from the common definition of an econometric model in that there is no requirement that it be identified with respect to frequentist estimation on a particular dataset.\footnote{One needs to believe the model is identified with respect to the universe of \emph{all relevant information}, including prior beliefs and results from previous micro-econometric studies that might be used to obtain calibrations or priors for Bayesian estimation. This is not a practical limitation, although the relative merits of different econometric strategies will be discussed in the quantitative Section 7 and in Section 9, the conclusion.} The definition of a structural model is analogous, with a continuous function $f$, so $$Z_{t+1}^{\mathcal{S}}=f(Z_{t}^{\mathcal{S}},v_{t}^{\mathcal{S}})$$ with the focus on the non-stochastic $\hat{Z}^{\mathcal{S}}_{t}=f(Z_{t}^{\mathcal{S}},\bar{v}^{\mathcal{S}})$,
with $F$ continuous on the product space $\mathcal{M}_{\mathcal{S}}=\mathcal{M}_{\Gamma}\times \mathcal{M}_{Z}$, where $\mathcal{M}_{\Gamma}$ and $\mathcal{M}_{Z}$ are metric spaces of, respectively, the parameters and the state space $\mathcal{Z}$.