1 Introduction
As key determinants of investment and consumption decisions, expectations are an important field of research in macroeconomics. Especially in the attempt to understand the dynamics and fluctuations of economies, various theories of expectation formation have been developed. An early model, the theory of adaptive expectations, assumed that businesses and individuals form their expectations from past realizations of a variable while also taking their previous forecast errors into account.
With the publication of “Rational Expectations and the Theory of Price Movements” (Muth 1961) the more complex concept of rational expectations gained popularity in macroeconomics. The theory assumes that agents correctly anticipate the future development of a specific economic variable, conditioning their expectations on all information that is currently available. Economists like Lucas argued in favor of integrating rational expectations into macroeconomic models (e.g. Lucas 1972).
Even though there is statistical evidence against the full-information rational expectations (FIRE) hypothesis, it still underlies most modern macroeconomic models. To provide an economically interpretable assessment of the FIRE hypothesis, Coibion and Gorodnichenko develop a new approach to test it. The new test, presented in “Information Rigidity and the Expectations Formation Process: A Simple Framework and New Facts” (Coibion and Gorodnichenko 2015), yields results that are meaningful both for appraising the FIRE hypothesis and for modern models of rational expectations which assume information rigidities.
The paper is structured as follows: Section 2 deals with the FIRE hypothesis and the traditional tests of it, as well as with modern rational expectation models which incorporate information rigidities. Section 3 presents Coibion and Gorodnichenko's new approach to testing the FIRE hypothesis, which is subsequently discussed and compared to other tests in section 4. Section 5 concludes.
2 Models of Expectation Formation
2.1 The Full-Information Rational Expectation Hypothesis
The theory of rational expectations outlined by Muth (1961) models agents as if they know the underlying model. This means that agents' expectations are equal to the predictions of the underlying theory. Put formally, agent $i$'s subjective probability
density function of outcomes, $f_i$, is identical to the probability density function of outcomes according to the model, $f$ (Muth 1961, p. 316):

$$f_i(x_{t+h} \mid \Omega_{it}) = f(x_{t+h} \mid \Psi_t) \qquad (1)$$

where $x_{t+h}$ is the variable of interest in period $t+h$. On the left-hand side this variable is forecasted conditional on the complete information set $\Omega_{it}$. On the right-hand side it is predicted conditional on the public information set $\Psi_t \subseteq \Omega_{it}$, which is a subset of the complete information set. Private information plays no role in Muth's theory (Pesaran and Weale 2006, p. 721).
Since Coibion and Gorodnichenko (2015) focus on developing a new test for market rationality rather than rationality on the individual level, we can relax the previous definition of rational expectations and rewrite equation (1):
$$\bar{f}(x_{t+h} \mid \Omega_{it}) = f(x_{t+h} \mid \Psi_t) \qquad (2)$$
where $\bar{f}$ is the average subjective probability density function of outcomes. This definition only requires that agents form rational expectations on average. It allows for irrational expectation formation on the individual level, as well as for heterogeneous information among agents (Pesaran and Weale 2006, p. 722).
Further, the rational expectation hypothesis asserts that all information is used efficiently (Muth 1961, p. 316). This feature is commonly called the orthogonality property and has various testable implications. In its basic form the orthogonality condition states that the rational forecast error is uncorrelated with any information from the public information set:
$$E(v_{t+h,t} \mid S_t) = 0 \qquad (3)$$

where the rational forecast error $v_{t+h,t}$ is defined as the difference between the realization of the variable of interest and the rational forecast of it formed in period $t$,

$$v_{t+h,t} = x_{t+h} - E_t(x_{t+h}), \qquad (4)$$

and $S_t$ is a subset of the public information set (Pesaran and Weale 2006, p. 721). The idea behind the orthogonality condition is that the forecast error cannot be predicted from public information. In other words, agents cannot improve their forecasts because they already use all information available to them efficiently.
However, in the context of the FIRE hypothesis, this condition cannot be tested since it would be impossible to take all available information into account. Therefore, the orthogonality condition is toned down to state that the rational forecast error is unpredictable when taking into account the public information subset St. Depending on which information from the public information set the subset St contains, different degrees of rationality can be defined. Hence, St is used as a proxy for the public information set Ψt.
According to Bonham and Dacy (1991, pp. 247 sq.) a forecast is weakly rational if it satisfies the necessary conditions outlined in the following. First, the forecast has to be unbiased, meaning that agents may not systematically over- or underestimate the variable to be forecasted. Second, the forecast error $v_{t+h,t}$ may not be predictable from past realizations of the variable of interest. Third, the forecast error has to be serially uncorrelated; in the context of informational efficiency this condition is necessary since past forecast errors constitute information that can potentially be used to improve the forecast. If, furthermore, the forecast error is uncorrelated with any variable from the public information set, the forecast is called “sufficiently rational” (Bonham and Dacy 1991, p. 248). Additionally, forecasts are described as “strongly rational” (Bonham and Dacy 1991, p. 248) if their accuracy cannot be enhanced by combining them with other forecasts (see for example Ashley, Granger, and Schmalensee (1980) and Fair and Shiller (1990)).
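As a rough illustration of how these necessary conditions can be taken to the data, the following sketch runs the corresponding auxiliary regressions in Python; the function name, the use of statsmodels, and the array inputs are assumptions made for illustration and are not part of the original papers.

```python
# Sketch of the necessary conditions for weak rationality (Bonham and Dacy 1991).
# `realization` holds x_{t+h} and `forecast` the period-t forecasts of it;
# both names and the data are hypothetical.
import numpy as np
import statsmodels.api as sm

def weak_rationality_checks(realization: np.ndarray, forecast: np.ndarray) -> dict:
    err = realization - forecast                       # forecast error v_{t+h,t}

    # 1) Unbiasedness: the mean forecast error should be zero.
    bias = sm.OLS(err, np.ones_like(err)).fit()

    # 2) Errors should be unpredictable from past realizations of the variable.
    m2 = sm.OLS(err[1:], sm.add_constant(realization[:-1])).fit()

    # 3) Errors should be serially uncorrelated (regress the error on its own lag).
    m3 = sm.OLS(err[1:], sm.add_constant(err[:-1])).fit()

    return {"unbiasedness_p": bias.pvalues[0],
            "past_realization_p": m2.pvalues[1],
            "serial_correlation_p": m3.pvalues[1]}
```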
Following Coibion and Gorodnichenko (2015), we will call empirical tests of these necessary and sufficient conditions “traditional tests”. One variation of such a test is employed by Coibion and Gorodnichenko (2015, p. 2654) before they present their new approach to testing the FIRE hypothesis. The test is carried out in the context of inflation forecasting. The authors use data from the Survey of Professional Forecasters (run by the Philadelphia Fed), which covers 30 to 40 professional forecasters. The choice of this data set stems from the consideration that it constitutes a useful benchmark: evidence for information rigidities among professionals would also indicate information rigidities throughout the (on average less informed) economy (Coibion and Gorodnichenko 2015, p. 2652). Their traditional test is specified as follows:
$$\underbrace{\pi_{t+3,t} - F_t(\pi_{t+3,t})}_{\text{Forecast error}} = c + \gamma F_t(\pi_{t+3,t}) + \delta z_{t-1} + \text{error}_t \qquad (5)$$
where $\pi_{t+3,t}$ is the average inflation rate over the year ahead (current quarter plus the next three quarters) and $F_t(\pi_{t+3,t})$ is the average forecast of it across agents. The idea behind the specification is to regress the average forecast error on the contemporaneous average forecast and a subset
of publicly available information. This subset is embodied by the control variable $z_{t-1}$, which includes lagged values of inflation, the average rate on three-month US Treasury bills, the quarterly log change of the oil price, and the average unemployment rate. These control variables were selected because they potentially have predictive power (Coibion and Gorodnichenko 2015, pp. 2645 sqq.). Under the assumption that forecast errors are serially uncorrelated, this specification tests sufficient rationality. Formally, the null hypothesis states:
$$H_0: \; c = 0, \quad \gamma = 0, \quad \delta = 0$$
In addition to the features implied by the orthogonality condition, the first part of the null hypothesis states that the intercept is equal to zero. This represents the assumption that the forecast error is not generally distorted, i.e. there is no systematic over- or underestimation that is independent of the information considered by agents.
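A minimal sketch of such a traditional test could look as follows; the series names, the use of statsmodels, and the HAC lag length are assumptions made for illustration and do not reproduce the authors' exact estimation setup.

```python
# Sketch of the traditional test (equation (5)): regress the average forecast
# error on the average forecast and one lagged control variable z_{t-1}.
# `infl`, `forecast` and `control` are hypothetical aligned quarterly series.
import numpy as np
import statsmodels.api as sm

def traditional_test(infl: np.ndarray, forecast: np.ndarray, control: np.ndarray):
    """infl[t]     : realized year-ahead inflation pi_{t+3,t}
       forecast[t] : average forecast F_t(pi_{t+3,t})
       control[t]  : control variable, entered with one lag (z_{t-1})."""
    err = infl - forecast                                    # forecast error
    X = sm.add_constant(np.column_stack([forecast[1:], control[:-1]]))
    # HAC standard errors because year-ahead forecast errors overlap.
    res = sm.OLS(err[1:], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
    return res.params, res.pvalues                           # (c, gamma, delta)

# Under FIRE, the joint null c = gamma = delta = 0 should not be rejected.
```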
The results of this test are outlined in table 1 (condensed version of panel A of table 1 in Coibion and Gorodnichenko (2015, p. 2653)):
Table 1: Coefficient estimates from the traditional test

                                 None      Inflation   Bond rate   Oil price   Unemployment rate
  Intercept $c$                 −0.181      −0.045      −0.091      −0.181        1.449**
  Forecast $F_t(\pi_{t+3,t})$    0.059      −0.299**     0.210*      0.045        0.095
  Control $z_{t-1}$                —         0.318**    −0.125*      1.603**     −0.281**

Each column reports the specification with the indicated variable included as the control $z_{t-1}$ (“None”: no control). **: p < 0.05, *: p < 0.10.
Consequently, the authors reject the null hypothesis of the FIRE hypothesis, as all of the control variables significantly predict the forecast error, which implies that agents on average do not form rational expectations. The rejection of the null hypothesis, however, does not give us any information about the economic significance of the rejection or any clues about its reasons (Coibion and Gorodnichenko 2015, p. 2645).
Possible reasons for the rejection could be information rigidities that arise if agents do not have full access to all publicly available information at any time. Thus a rejection of the FIRE hypothesis might not be rooted in agents forming “irrational expectations” in the sense of interpreting available information wrongly, but may be
due to imperfect access to information. Models which incorporate such information rigidities will be the subject of the next three subsections.
2.2 Sticky Information Models
One of the most important information rigidity models is the sticky information model proposed by Mankiw and Reis (2002) in order to explain nominal rigidities. In their fundamental model, in each period only a fraction $(1-\lambda)$ of producers updates its information set, which gives these producers full information. This allows them to form forecasts consistent with the FIRE hypothesis. The information is then used to compute a path of optimal prices according to the price setting formula
$$p^*_t = p_t + \alpha y_t \qquad (6)$$
where $p^*_t$ is the optimal price, $p_t$ the overall price level, $\alpha$ the degree of stickiness and $y_t$ the output gap. This relationship resembles the short-run Phillips curve, according to which higher prices are charged in booms and lower prices in recessions (Mankiw and Reis 2002, pp. 1299 sqq.).
Since not all firms update their information in every period, some firms set their prices based on outdated information. The price charged in period $t$ by a firm whose most recent information update was $j$ periods ago is defined as
$$x^j_t = E_{t-j}(p^*_t) \qquad (7)$$

which is the optimal price rationally calculated on the basis of a (full) information set obtained $j$ periods ago. The aggregate price level is then expressed by
$$p_t = (1-\lambda)\sum_{j=0}^{\infty} \lambda^j x^j_t = (1-\lambda)\sum_{j=0}^{\infty} \lambda^j E_{t-j}(p_t + \alpha y_t) \qquad (8)$$
which is the weighted average of all prices charged (Mankiw and Reis 2002, p. 1300).¹
¹ The notation here is slightly altered compared to the original paper by Mankiw and Reis (2002) for the sake of consistency, since Coibion and Gorodnichenko (2015) deviate in the same way.
An intuitive explanation of why firms do not continuously update their information is given by Reis (2006), who shows that this behavior is optimal. Under the assumption that gathering and processing information is costly (Mankiw and Reis 2002, p. 1316; Reis 2006, p. 795), producers maximize their expected present discounted profit, taking these costs into account. Solving this maximization problem amounts to weighing the value of updating information and prices against the costs incurred (Reis 2006, p. 802).
As might be expected, the degree of inattention increases with planning costs and decreases with the volatility of shocks, since inattention is especially costly in volatile times (Reis 2006, p. 803). Carroll (2003) supports the assertion that agents are able to form (close-to-)rational expectations when updating their information sets. He shows that forecasts by professionals – which come closest to being rational – spread slowly through the population, where they are adopted by agents (Carroll 2003, pp. 269 sqq.).
Coibion and Gorodnichenko (2015, pp. 2648 sqq.) generalize this model to make it applicable to other contexts. In the following specification it is no longer the overall price level that is computed by weighting individual forecasts; more generally, the average time-$t$ forecast of a variable $x$ at time $t+h$ is computed by weighting individual forecasts formed at different points in time:
$$F_t(x_{t+h}) = (1-\lambda)\sum_{j=0}^{\infty} \lambda^j E_{t-j}(x_{t+h}) = (1-\lambda)E_t(x_{t+h}) + \lambda F_{t-1}(x_{t+h}) \qquad (9)$$
In the second line of the equation above the idea becomes clearer: the average forecast across agents is composed of a share $(1-\lambda)$ of agents who form truly rational forecasts, i.e. based on all information that is currently available, while the remaining share $\lambda$ forms forecasts based on older information. Within this remaining share there is a fraction $(1-\lambda)$ who formed rational forecasts based on all information available at time $t-1$ (within the whole population this is a share of $(1-\lambda)\lambda$), while the rest formed their forecasts based on even older information (an overall share of $(1-\lambda)\lambda^2$, and so on). This argument carries on indefinitely, down to agents who formed their expectation of $x_{t+h}$ arbitrarily many periods ago. It is noteworthy that these expectations satisfied the FIRE hypothesis at the time they were formed, since they incorporated all information available at that time efficiently. However, at the time of evaluation (period $t$) these forecasts will typically be deemed irrational, since the agents did not and could not consider new information which became available in the
meantime and would have improved the forecast. This again elucidates the difficulty with the term “irrational”: it does not imply that agents formed arbitrary forecasts – contrary to what might be supposed at first sight of the expression.
Next, Coibion and Gorodnichenko (2015, p. 2649) combine equation (9) with the definition of the rational forecast error (equation (4)) to obtain a relationship between forecast errors and forecast revisions:
$$x_{t+h} - F_t(x_{t+h}) = \frac{\lambda}{1-\lambda}\left(F_t(x_{t+h}) - F_{t-1}(x_{t+h})\right) + v_{t+h,t} \qquad (10)$$
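The intermediate step from equation (9) to equation (10), which is not spelled out in the text, follows directly from the definitions above: subtracting $F_{t-1}(x_{t+h})$ from both sides of (9) gives

$$F_t(x_{t+h}) - F_{t-1}(x_{t+h}) = (1-\lambda)\big(E_t(x_{t+h}) - F_{t-1}(x_{t+h})\big),$$

and substituting $E_t(x_{t+h}) = x_{t+h} - v_{t+h,t}$ into (9) yields

$$x_{t+h} - F_t(x_{t+h}) = v_{t+h,t} + \lambda\big(E_t(x_{t+h}) - F_{t-1}(x_{t+h})\big) = v_{t+h,t} + \frac{\lambda}{1-\lambda}\big(F_t(x_{t+h}) - F_{t-1}(x_{t+h})\big),$$

which is equation (10).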
In this relationship the coefficient on forecast revisions depends solely on the degree of information rigidity $\lambda$. This finding is compatible with our intuition: if all agents updated the information sets on which they form expectations in every period ($(1-\lambda) = 1 \Rightarrow \lambda = 0$), no information rigidities would be present. The forecast error would then be unpredictable and the average forecast would satisfy the FIRE hypothesis.
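To make the mechanism concrete, the following simulation sketch builds a sticky-information economy around an AR(1) variable and recovers $\lambda/(1-\lambda)$ from the regression of average forecast errors on average forecast revisions; all parameter values are illustrative and not taken from the paper.

```python
# Simulation sketch: in a sticky-information economy with an AR(1) target
# variable, regressing average forecast errors on average forecast revisions
# recovers lambda/(1 - lambda). All parameter values are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
rho, lam, h = 0.7, 0.5, 3            # persistence, information rigidity, horizon
T, J = 5000, 200                     # sample length, truncation of the lag sum

# AR(1) process x_t = rho * x_{t-1} + u_t
u = rng.normal(size=T + J + h)
x = np.zeros(T + J + h)
for t in range(1, len(x)):
    x[t] = rho * x[t - 1] + u[t]

j = np.arange(J)
weights = (1 - lam) * lam ** j       # share of agents who last updated j periods ago

def avg_forecast(t, target):
    """Average period-t forecast of x_{target}: an agent whose last update was
    j periods ago holds the rational forecast rho**(target-(t-j)) * x_{t-j}."""
    return np.sum(weights * rho ** (target - (t - j)) * x[t - j])

ts = np.arange(J + 1, T + J)                                   # usable dates
F      = np.array([avg_forecast(t,     t + h) for t in ts])    # F_t(x_{t+h})
F_prev = np.array([avg_forecast(t - 1, t + h) for t in ts])    # F_{t-1}(x_{t+h})
err, rev = x[ts + h] - F, F - F_prev

slope = np.polyfit(rev, err, 1)[0]
print(slope, lam / (1 - lam))        # both should be close to 1.0
```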
2.3 Noisy Information Models
In the class of noisy information models agents are modeled as if they know the underlying economic model and continuously update their information sets. In contrast to the classical theory of rational expectations, however, agents do not observe the true state of the variable they are interested in. Instead they only observe a signal that is afflicted with noise, and in order to form expectations they have to extract the underlying signal from this noise.
The most famous, and a relatively intuitive, model of this class is the Lucas islands model. In the model developed by Lucas (1973, pp. 327 sq.) there exists a number of separated competitive markets (“islands”). On each island (denoted by $z$) there is one producer who charges the price $p_t(z)$. By assumption, the price on each island is based on the overall price level and differs from it only due to market-specific shocks $z_t$ that are zero on average:
$$p_t(z) = p_t + z_t \qquad (11)$$

Based on the price $p_t(z)$ relative to the overall or average price level $p_t$, the producer chooses to produce a specific quantity. This is apparent in the supply curve postulated
by Lucas (1973, p. 327). However, since the overall price level is not observable, producers have to form rational expectations about it, and the equation above becomes
$$p_t(z) = E(p_t) + z_t = p_t - v_t + z_t \qquad (12)$$
where vt is the rational forecast error (compare equation (4)).
The rational forecast error can be attributed to an aggregate shock, which requires no response because it only implies nominal changes. A market-specific shock, on the other hand, affects the producer in real terms and thus requires a change in the supplied quantity (Blanchard and Fischer 1989, p. 356). The problem producers face is that they cannot distinguish between a market-specific shock and an aggregate (nominal) shock, since they only observe the change in the price on their respective island, $p_t(z)$. To solve this problem, a method is proposed which relies on historical data on $v_t$ and $z_t$ to generate a forecast that uses past information to estimate the probability of both types of shocks (Lucas 1973, pp. 328 sqq.).
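The signal-extraction logic can be illustrated with a stylized Gaussian example; the variances and the closed-form weight below are a textbook simplification with made-up numbers, not Lucas's exact derivation.

```python
# Stylized signal extraction on a Lucas island: the producer observes only
# p_island = p_aggregate + z and splits the observed price into an estimated
# aggregate and an estimated island-specific component using the historical
# variances of both shocks. All numbers are illustrative.
import numpy as np

def split_price_signal(p_island, prior_mean_p, var_p, var_z):
    """Posterior mean of the aggregate price level given the island price,
    under Gaussian assumptions: weight theta = var_p / (var_p + var_z)."""
    theta = var_p / (var_p + var_z)
    p_hat = prior_mean_p + theta * (p_island - prior_mean_p)   # aggregate part
    z_hat = p_island - p_hat                                   # island-specific part
    return p_hat, z_hat

# Example: when island-specific shocks are volatile (large var_z), most of a
# price change is attributed to the island-specific shock, so the producer
# responds more strongly with output.
print(split_price_signal(p_island=1.0, prior_mean_p=0.0, var_p=0.2, var_z=0.8))
```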
The noisy information model of Woodford (2001), on the other hand, makes use of the Kalman filter to extract the signal of interest – such as the overall price level above. In this model the variable of interest, $x_t$, is assumed to follow a first-order autoregressive process:
$$x_t = \rho x_{t-1} + u_t \qquad (13)$$
where $\rho \in (0,1)$ is the degree of serial correlation and $u_t$ is white noise with mean zero, i.e. unpredictable, so that the value of $x$ depends only on its own prior realization and the current shock (Woodford 2001, p. 13; Coibion and Gorodnichenko 2015, p. 2650). Again the agent cannot observe the true state of this variable but only a noisy signal of it,
$$y_{it} = x_t + \omega_t, \qquad (14)$$

where $\omega_t$ is mean-zero white noise (Woodford 2001, p. 14; Coibion and Gorodnichenko 2015, p. 2650). Under the assumption that agents know the underlying
economic model, expectations of the variable of interest can be formed on the individual level using the Kalman filter:²

$$F_{it}(x_t) = G y_{it} + (1-G) F_{it-1}(x_t) \qquad (15)$$

and, recalling equation (13), the $h$-period-ahead forecast follows as

$$F_{it}(x_{t+h}) = \rho^h F_{it}(x_t) \qquad (16)$$
where $G$ is the Kalman gain, i.e. the weight placed on new, noisy information. Consequently $(1-G)$, the weight put on the outdated forecast, is called the degree of information rigidity (Coibion and Gorodnichenko 2015, p. 2650). As before, Coibion and Gorodnichenko (2015, p. 2650) average forecasts across agents and combine the expression for the average forecast from the noisy information model with the definition of the rational forecast error (equation (4)) in order to point out the relationship between ex post mean forecast errors and ex ante mean forecast revisions:
$$x_{t+h} - F_t(x_{t+h}) = \frac{1-G}{G}\left(F_t(x_{t+h}) - F_{t-1}(x_{t+h})\right) + v_{t+h,t} \qquad (17)$$
Here, the coefficient on forecast revisions depends only on the Kalman gain $G$, i.e. the weight put on new noisy information. The Kalman gain is determined by factors such as the persistence of the series and the signal-to-noise ratio (Coibion and Gorodnichenko 2015, p. 2651). If agents relied only on the most recent, noisy information ($G = 1$), the forecast error would be unpredictable. Coibion and Gorodnichenko (2015, p. 2651) describe intuitively why agents do not do so: they cannot tell whether observed changes reflect genuine movements of the variable or merely noise. Consequently, new information enters the expectation formation process only gradually.
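A minimal simulation of the individual updating rule in equations (15) and (16) might look as follows; the Kalman gain is imposed directly rather than derived from the signal-to-noise ratio, and all parameter values are illustrative.

```python
# Sketch of individual noisy-information forecasting, equations (15)-(16):
# F_it(x_t) = G*y_it + (1-G)*F_it-1(x_t) and F_it(x_{t+h}) = rho**h * F_it(x_t).
# The gain G is imposed here; in the full model it follows from the
# signal-to-noise ratio of the observation equation.
import numpy as np

rng = np.random.default_rng(1)
rho, G, h, T = 0.7, 0.45, 3, 200
sigma_u, sigma_w = 1.0, 1.0          # state and observation noise std. deviations

x = np.zeros(T)                      # true state, x_t = rho * x_{t-1} + u_t
nowcast = np.zeros(T)                # F_it(x_t)
forecast = np.zeros(T)               # F_it(x_{t+h})

for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal(scale=sigma_u)
    y = x[t] + rng.normal(scale=sigma_w)        # noisy signal y_it
    prior = rho * nowcast[t - 1]                # F_it-1(x_t), last period's view of x_t
    nowcast[t] = G * y + (1 - G) * prior        # equation (15)
    forecast[t] = rho ** h * nowcast[t]         # equation (16)

# Degree of information rigidity: weight (1 - G) on the outdated forecast.
print("implied coefficient on forecast revisions:", (1 - G) / G)
```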
2.4 Rational Inattention Models
Completing the overview of the three main types of information rigidity theories, this subsection deals with rational inattention models. Since Coibion and Gorodnichenko (2015) treat this class of models only briefly, as its main implications are similar to
² The Kalman filter is a mathematical procedure for estimating the value of a variable which follows an autoregressive process when only noisy observations of it are available. It requires that the quantity to be estimated is described by a dynamic system of equations (Hamilton 1994, pp. 372 sqq.).
those of noisy information models, this subsection will be shorter than the previous two.
An exemplary rational inattention model was proposed by Sims (2003). Its main idea is that agents process information through a channel which acts as a constraint on the information flow. In consequence, the information on the basis of which forecasts are formed is obscured. This can be thought of as an input-output transformation where the input is the signal of interest and the output an error-afflicted version of it (Sims 2003, pp. 667 sq.). The signal-extraction problem the agent now faces is similar to that in noisy information models, where the agent only observes the tracked variable plus noise (Sims 2003, p. 670).
Despite this striking similarity, the rational inattention model assumes this error or noise term to be endogenous, contrary to the assumption of exogenous noise in noisy information models. The endogeneity results from the information channeling process, in which the proportion of signal relative to noise is determined. Intuitively, noise is reduced when more resources are allocated to observing the variable, i.e. when agents pay more attention (Sims 2003, pp. 687 sq.).
This is similar to the explanation of the sticky information model by Reis (2006, p. 795), who points out that agents rationally decide to be inattentive and not to update their information set, leading to forecasts based on outdated information. The key difference is that in sticky information models agents obtain full information when they update their information set, while in rational inattention models (as in noisy information models) agents virtually never observe the true state of the variable they are interested in. Hence the term “rational inattention” can have different meanings depending on the context.
The fundamental model by Sims has been adopted in the literature and is, for example, extended by Maćkowiak and Wiederholt (2009), who develop a model of rational inattention by firms. In their model firms observe conditions affecting the whole economy as well as conditions specific to their own firm, but cannot allocate enough resources to observe both environments perfectly at the same time, which leads to a trade-off. Moreover, the observed variables describing aggregate and firm-specific conditions are endogenous (Maćkowiak and Wiederholt 2009, p. 771).
3 New Approach to Test the FIRE Hypothesis
3.1 Idea and Econometric Specification
Based on the relationship between ex post mean forecast errors and ex ante mean forecast revisions identified in the context of sticky and noisy information models (see equations (10) and (17)), the authors specify a new test of the FIRE hypothesis. The new test arises straightforwardly from that relationship and makes it possible to obtain an estimate of the degree of information rigidity as defined by sticky and noisy information models:
$$x_{t+h} - F_t(x_{t+h}) = c + \beta\left(F_t(x_{t+h}) - F_{t-1}(x_{t+h})\right) + v_{t+h,t} \qquad (18)$$
where the average forecast error is regressed on average forecast revisions. The error term is the rational forecast error and thus uncorrelated with any available information, so that the coefficients can be estimated by ordinary least squares (Coibion and Gorodnichenko 2015, pp. 2651 sq.).
Analogous to traditional tests of the FIRE hypothesis, we would expect forecast errors to be unpredictable, since information about forecast revisions is available to agents. Under the FIRE hypothesis this information would be part of the full information set and forecasts would therefore also be conditioned on it. Accordingly, the null hypothesis, $H_0: (c, \beta) = (0, 0)$, would not be rejected if the FIRE assumption holds. Additionally, further information (described by control variables) should have no predictive power for the forecast error, both in the traditional and in the new test. Within the framework of traditional tests, a statistical rejection of the FIRE hypothesis gives no insight into the economic relevance of this rejection. The new approach, however, makes it possible to appraise and interpret rejections in the context of information rigidity models, since the coefficient on forecast revisions can be interpreted within the framework of models which incorporate information rigidities (Coibion and Gorodnichenko 2015, p. 2646).
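A minimal sketch of the new test is given below; the series names are hypothetical, and the HAC correction for overlapping forecast errors is an assumption about a sensible implementation rather than a description of the authors' exact code.

```python
# Sketch of the Coibion-Gorodnichenko regression (equation (18)): average
# forecast errors on average forecast revisions. `realization[t]` is x_{t+h},
# `F[t]` the period-t mean forecast of it and `F_prev[t]` the period t-1 mean
# forecast of the same realization. Series names are hypothetical.
import numpy as np
import statsmodels.api as sm

def cg_test(realization: np.ndarray, F: np.ndarray, F_prev: np.ndarray):
    err = realization - F                      # ex post mean forecast error
    rev = F - F_prev                           # ex ante mean forecast revision
    res = sm.OLS(err, sm.add_constant(rev)).fit(cov_type="HAC",
                                                cov_kwds={"maxlags": 4})
    c_hat, beta_hat = res.params
    return c_hat, beta_hat, res.pvalues

# FIRE predicts beta = 0; sticky/noisy information models predict beta > 0.
```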
In particular, both the sticky and the noisy information model predict a positive relationship between the ex post average forecast error and the ex ante average forecast revision. Formally, the null hypothesis in these information rigidity models is:
$$H_0: \; c = 0, \quad \beta > 0$$
The first part of the null hypothesis, that the intercept is equal to zero, is in the same spirit as in the traditional test and rules out systematic distortions of the expectation formation process. The second part asserts a positive relationship between the average forecast error and the average forecast revision. Intuitively, this means that on average agents revise their forecasts by less than they should. This reflects the prediction of information rigidity models that the entirety of agents does not have full information at its disposal at all times.
Within sticky information models this predictability arises from the fact that some agents have updated their information sets and have full information, while others have not updated their information set and thus form expectations based on outdated information. Consider an exemplary economy where in 2015 agents on average expected the inflation rate in 2017 to be 5%. In 2016, new information credibly suggests that the inflation rate will climb to 10% by 2017. If we now suppose that half of the population receives this new information and the other half does not, the average forecast formed in 2016 about the inflation rate in 2017 will be $F_{2016}(\pi_{2017}) = 0.5 \cdot 5\% + 0.5 \cdot 10\% = 7.5\%$. Therefore, the average forecast error, $\pi_{2017} - F_{2016}(\pi_{2017}) = 10\% - 7.5\% = 2.5\%$, is positively correlated with the average forecast revision $F_{2016}(\pi_{2017}) - F_{2015}(\pi_{2017}) = 2.5\%$, and could have been predicted on this basis. Noisy information models predict the same relationship, since new information enters the expectation formation process only gradually.
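The arithmetic of this example can be written out directly, using the numbers from the text:

```python
# Worked example from the text: half of the agents receive the new information
# (and forecast 10% inflation for 2017), half keep the old forecast of 5%.
forecast_2015 = 0.05                            # average forecast made in 2015
avg_forecast_2016 = 0.5 * 0.10 + 0.5 * 0.05     # = 0.075
realization_2017 = 0.10

error = realization_2017 - avg_forecast_2016    # ex post forecast error, 0.025
revision = avg_forecast_2016 - forecast_2015    # ex ante forecast revision, 0.025
print(error, revision)
# With half of the agents updating, lambda = 0.5 and lambda/(1-lambda) = 1:
# the forecast error equals the revision, exactly as equation (10) predicts.
```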
It becomes apparent here that this predictability of forecast errors from forecast revisions occurs only on the aggregate level, when it is tested whether agents form rational expectations on average. On the individual level no such predictability would occur, since agents either do or do not update their information sets, as argued by sticky information models. Noisy information models point in the same direction by noting that individual forecast revisions are included in the information set available to the respective agent (Coibion and Gorodnichenko 2015, p. 2652).
3.2 Evidence for Information Rigidity in Forecasting Inflation
Analogous to the traditional test run on US inflation forecast data from the Survey of Professional Forecasters, the authors apply their new approach to the same dataset. The specified test also includes the same lagged control variables, described by the regressor $z_{t-1}$ (Coibion and Gorodnichenko 2015, p. 2653):
$$\pi_{t+3,t} - F_t(\pi_{t+3,t}) = c + \beta\left(F_t(\pi_{t+3,t}) - F_{t-1}(\pi_{t+3,t})\right) + \delta z_{t-1} + v_{t+h,t} \qquad (19)$$
The authors obtain the following results from the test (condensed version of panel B of table 1 in Coibion and Gorodnichenko 2015, p. 2653):
Table 2: Coefficient estimates from the new approach

                           None      Inflation   Bond rate   Oil price   Unemployment rate
  Constant $c$             0.002      −0.074       0.151      −0.021        1.134**
  Forecast revision        1.193**     1.141**     1.196**     1.125**      1.062**
  Control $z_{t-1}$          —          0.021      −0.029       0.576      −0.178**

Each column reports the specification with the indicated variable included as the control $z_{t-1}$ (“None”: no control). **: p < 0.05, *: p < 0.10.
The results deliver strong evidence against the null hypothesis of FIRE, since forecast errors are predictable. This rejection of the FIRE hypothesis goes in the direction predicted by information rigidity models: in all specifications the coefficient on forecast revisions is greater than one and significant at the five percent level.
Additionally, the estimated coefficient $\hat{\beta}$ can be interpreted within the framework of information rigidity models, as it can be mapped into the degree of information rigidity. This is illustrated by the estimate $\hat{\beta} \approx 1.19$ obtained in the specification without controls. In the context of sticky information models this corresponds to a degree of information rigidity of $\hat{\lambda} = \hat{\beta}/(1+\hat{\beta}) \approx 0.54$. This means that the average agent updates his information set every $1/(1-\hat{\lambda}) \approx 2.2$ quarters, i.e. roughly every six to seven months. Similarly, in noisy information models an estimated Kalman gain of $\hat{G} = 1/(1+\hat{\beta}) \approx 0.46$ implies that the average agent puts a weight of less than 50 percent on new information, reflecting the hesitancy due to possibly unidentifiable noise (Coibion and Gorodnichenko 2015, p. 2653).
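The mapping from the estimated coefficient to the model parameters is a one-line computation; the sketch below uses the point estimate from the specification without controls.

```python
# Mapping the estimated coefficient on forecast revisions into the structural
# parameters of the two information-rigidity models (equations (10) and (17)).
beta_hat = 1.193                      # estimate from the no-controls specification

lam = beta_hat / (1 + beta_hat)       # sticky information: beta = lambda/(1-lambda)
avg_update_interval = 1 / (1 - lam)   # average time between updates, in quarters

G = 1 / (1 + beta_hat)                # noisy information: beta = (1-G)/G

print(round(lam, 2), round(avg_update_interval, 2), round(G, 2))
# approx. 0.54, 2.19 quarters (about six to seven months), 0.46
```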
Excluding the specification in which the unemployment rate is controlled for, the remaining results are also in line with the predictions of rational expectation models which incorporate information rigidities: the intercept is not significantly different from zero and the additional control variables have no significant predictive power. The findings in the last case, where the unemployment rate is taken into account, are puzzling and cannot be explained by the information rigidity models considered here. These unexpected deviations thus require further research (Coibion and Gorodnichenko 2015, pp. 2654 sq.).
3.3 Further Insights
Finally, the authors find evidence for the state-dependence of information rigidities. Running the new-approach regression on the inflation data from the Survey of Professional Forecasters, they find that the degree of information rigidity increases during
times of low macroeconomic volatility and decreases during times of high volatility (Coibion and Gorodnichenko 2015, p. 2671). These results are in accordance with rational inattention models, which predict a lower degree of inattention in erratic times since it is especially costly not to pay attention then. The same pattern is also found by Wieland (2013, pp. 21 sq.), who examines inflation forecast data from a range of high-income countries using a very similar approach.
4 Discussion
4.1 Robustness of Results
To confirm the robustness of their results, the authors run a variety of tests on the underlying assumptions of their model and extend the scope of their examination. In this subsection we focus on a selection of these robustness tests for the sake of brevity. First, it is considered whether heterogeneous loss functions could obscure the results. In particular, asymmetric loss functions are examined; such an asymmetry occurs, for example, when agents are considerably more averse to overestimating the variable than to underestimating it. No evidence is found that such behavior could qualitatively change the findings. Second, the authors consider a related phenomenon that arises when agents deliberately under-react to new information in order to smooth their forecasts by keeping revisions small. This conjecture is rejected with the help of a test which is specified similarly to the new test proposed by the authors and additionally accounts for lagged forecast revisions (Coibion and Gorodnichenko 2015, pp. 2659 sqq.).
Additionally, as the approach is claimed to work universally – i.e. for different variables, forecast horizons and types of forecasters – Coibion and Gorodnichenko run their new approach on different datasets. Their findings suggest that the model is able to recover estimates of information rigidities for other variables, also when they are not forecasted by professionals (as tested before with the SPF data). However, they find that the degree of information rigidity varies not only across agents, but also across variables, and differs with the forecast horizon. In sum, the FIRE hypothesis is rejected and evidence for the prevalence of information rigidities is found across these variations, though the variation in the degree of information rigidity cannot be explained by the sticky and noisy information models considered (Coibion and Gorodnichenko 2015, pp. 2611 sqq.).
Furthermore, other authors using the same or a similar approach on other data obtain results which confirm the existence of information rigidities in the expectation
formation process. Interpreted within the sticky information model, other studies estimate that information updates take place every quarter (Wieland 2013, p. 11), every six months (Dovern et al. 2015, p. 150) or every three to four quarters (Andrade and Le Bihan 2013, p. 975). The results of the examined paper are roughly in line with these findings.
4.2 Interpretation of Results
As shown so far, relatively robust evidence of the prevalence of information rigidities in the expectation formation process has been found. The authors use two theories – sticky and noisy information – to specify their new test and thus offer two different lenses through which to assess their findings. It should be noted that these theories are not mutually exclusive and that some intermediate form might be closer to reality. The choice of the theory within whose context the results should be interpreted is therefore subject to considerations of plausibility.
4.3 Similar Approaches
In this subsection similar approaches aiming to assess the nature of the expectation formation process are presented and compared to the approach of Coibion and Gorodnichenko (2015). First, Coibion and Gorodnichenko (2012, pp. 138 sqq.) proposed a preceding model for the inflation forecasting setting, in which average forecast errors were related to identified inflationary and disinflationary shocks. Intuitively, the model built on the idea that, if information rigidities prevail, the average forecast error (realization minus forecast) would have the same sign as the change of the forecasted variable induced by the shock. In the case of an inflationary shock, the change of the variable would not be anticipated by all agents and they would therefore under-react, and vice versa. This is the same idea as in the new approach, but has the disadvantage that shocks have to be identified first.
Further, Nordhaus (1987) proposed an early approach to test for informational efficiency by checking whether forecast revisions are uncorrelated with past forecast revisions (of the same realization) and with the contemporaneous forecast error. The results are qualitatively in line with those of Coibion and Gorodnichenko (2015) and are interpreted as “forecast stickiness” (Nordhaus 1987, p. 670); as a possible reason from behavioral psychology, the anchoring of forecasts is named. The idea of checking for correlation among forecast revisions was adopted by Dovern et al. (2015, p. 146), who develop a specification in which the contemporaneous forecast revision is regressed on lagged forecast
revisions. Their findings in aggregate forecasts are in accordance with those of Coibion and Gorodnichenko (2015).
Last, Andrade and Le Bihan (2013, pp. 972 sqq.) propose a rather different model to quantify the degree of information rigidity. Just like the paper examined here, their approach is based on the implicit assumption that forecasts are updated every time new information is obtained. Using forecast revisions as a proxy for information updates on the individual level, the average degree of information rigidity is calculated from the average probability of updating one's information set from one period to the next. This amounts to calculating the fraction of agents who update their forecast. Their findings are also in line with those of Coibion and Gorodnichenko (2015).
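In the spirit of this approach, the degree of information rigidity could be proxied by the average share of forecasters whose reported forecast is unchanged between consecutive survey rounds; the sketch below uses a made-up panel and is not Andrade and Le Bihan's actual estimator.

```python
# Sketch of an Andrade-Le Bihan style measure: treat an unchanged individual
# forecast as "no information update" and average the share of non-updaters
# over time. `panel` is a hypothetical array of shape (T, N): T survey rounds,
# N forecasters, all forecasting the same fixed-event outcome.
import numpy as np

def average_non_update_share(panel: np.ndarray, tol: float = 1e-8) -> float:
    revisions = np.abs(np.diff(panel, axis=0))        # |F_it - F_i,t-1|
    non_updaters = (revisions < tol).mean(axis=1)     # share per survey round
    return float(non_updaters.mean())                 # average degree of rigidity

# Example with made-up data: three forecasters, four survey rounds.
panel = np.array([[2.0, 2.0, 1.5],
                  [2.0, 2.2, 1.5],
                  [2.1, 2.2, 1.5],
                  [2.1, 2.2, 1.7]])
print(average_non_update_share(panel))   # 6 of the 9 possible revisions are zero -> 0.67
```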
4.4 Information Rigidities on the Individual Level
However, as the authors themselves assert, evidence for information rigidities is only observable when aggregate data is used, since the predictability of forecast errors is only apparent when examining consensus forecasts. In order to reconcile findings on the aggregate level with findings on the individual level, this subsection deals with a model that addresses both.
The model by Bordalo et al. (forthcoming) is motivated by their finding that, as pointed out before, agents on average under-react to new information but in most cases over-react on the individual level. The persistence of this pattern is then explained by a newly developed model. In this model the authors depart from Muth's assumption that private information plays no role and allow agents to interpret new information differently owing to the private bits of information they possess (Bordalo et al. forthcoming, p. 19). This is comparable to noisy information models, where agents receive the same information but obtain different signals. The authors also introduce the idea of “representative states”, which are outcomes whose likelihood increased the most when the new information became available and which are therefore overweighted by agents in forming their expectations (Bordalo et al. forthcoming, p. 21).
This overweighting of representative outcomes leads to over-reaction on the individual level. Under certain conditions this is reconcilable with under-reaction in aggregate forecasts, namely when the over-reaction is not too strong and agents do not have access to the private information of others. In this case, under-reaction to the information that is available on average is still possible (Bordalo et al. forthcoming, p. 23).
5 Concluding Remarks
In summary, the authors find relatively robust evidence against the null hypothesis of FIRE using a new approach. Their findings are interpreted within the framework of information rigidity models and aim to deliver stylized facts that can be used in calibrating macroeconomic models. Even though their results are qualitatively in line with similar studies, the variation of the information rigidity estimates is large relative to the goal of delivering new stylized facts. In particular, the insights of studies that detect other reasons for the predictability of forecast errors on the individual level challenge these findings. The mechanisms that govern the expectation formation process and their interaction will remain a fruitful field of research.