Working Paper: CEPR ID: DP10034
Authors: Eric Ghysels
Abstract: We consider estimating volatility risk factors using large panels of filtered or realized volatilities. The data structure involves three types of asymptotic expansions. There is the cross-section of volatility estimates at each point in time, namely i = 1, …, N, observed at dates t = 1, …, T. In addition to expanding N and T, we also have the sampling frequency h of the data used to compute the volatility estimates, which rely on data collected at increasing frequency, h → 0. The continuous-record or in-fill asymptotics (h → 0) allow us to control the cross-sectional and serial correlation among the idiosyncratic errors of the panel. A remarkable result emerges. Under suitable regularity conditions, traditional principal component analysis yields super-consistent estimates of the factors at each point in time. Namely, contrary to the usual root-N consistency with a standard normal limit, we find N-consistency, also with a standard normal limit, because the high-frequency sampling scheme is tied to the size of the cross-section, boosting the rate of convergence. We also show that standard cross-sectionally driven criteria suffice for consistent estimation of the number of factors, which differs from traditional panel data results. Finally, we show that the panel data estimates improve upon the individual volatility estimates.
Keywords: ARCH-type filters; Principal component analysis; Realized volatility
JEL Codes: C13; C33
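The approach described in the abstract can be illustrated with a short simulation: build an N × T panel of realized volatility proxies from high-frequency returns, extract common factors by principal component analysis, and pick the number of factors with a cross-sectionally driven information criterion. The sketch below is a minimal illustration, not the paper's estimator; the one-factor data-generating process, the intraday sampling scheme (M = 390 returns per day, so h = 1/M), the use of log realized variances, and the Bai-Ng-style IC_p1 penalty are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Panel dimensions and intraday sampling frequency (illustrative values).
N, T, M = 100, 250, 390          # assets, days, intraday returns per day (h = 1/M)

# One-factor structure in log daily variances: log sigma2_{it} = lambda_i * f_t + e_{it}.
f = np.cumsum(0.1 * rng.standard_normal(T))         # common volatility factor path
lam = 0.5 + rng.random(N)                           # factor loadings
log_var = lam[:, None] * f[None, :] + 0.2 * rng.standard_normal((N, T))
sigma = np.exp(0.5 * log_var)                       # daily volatilities

# Simulate intraday returns and build the N x T panel of realized variances RV_{it}.
intraday = sigma[:, :, None] / np.sqrt(M) * rng.standard_normal((N, T, M))
RV = (intraday ** 2).sum(axis=2)

# Principal component analysis on the demeaned panel of (log) volatility proxies.
X = np.log(RV)
X = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def ic_p1(X, k, U, s, Vt):
    """Bai-Ng IC_p1-style criterion: log fit plus a cross-sectionally driven penalty."""
    n, t = X.shape
    Xhat = (U[:, :k] * s[:k]) @ Vt[:k] if k > 0 else np.zeros_like(X)
    ssr = ((X - Xhat) ** 2).mean()
    return np.log(ssr) + k * (n + t) / (n * t) * np.log(n * t / (n + t))

k_hat = min(range(0, 9), key=lambda k: ic_p1(X, k, U, s, Vt))
factors = Vt[:k_hat] * s[:k_hat, None]               # estimated factor path(s), length T
print("estimated number of factors:", k_hat)
```

In the paper's triple-asymptotic setting the sampling frequency is tied to the size of the cross-section, which is what boosts the convergence rate; in this sketch M is simply held fixed for ease of illustration.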
| Cause | Effect |
|---|---|
| high-frequency sampling scheme (C58) | size of the cross-section (C21) |
| number of assets (n) increases (E01) | convergence rate improves (O47) |
| sampling frequency (h) approaches zero (C69) | convergence rate improves (O47) |
| panel data model (C23) | reliability of the volatility proxies (C58) |
| number of factors (C39) | underlying data structure (Y10) |