Journal of business & economic statistics.

Material type: Text
Series: Journal of business & economic statistics. Volume 41, No. 3, July 2023
Publication details: Alexandria, VA : American Statistical Association, c2023.
Description: various pagings ; 28 cm
ISSN:
  • 0735-0015
Subject(s):
Contents:
Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19.-- Using Survey Information for Improving the Density Nowcasting of U.S. GDP.-- Bootstrapping Two-Stage Quasi-Maximum Likelihood Estimators of Time Series Models.-- Identification and Estimation of Multinomial Choice Models with Latent Special Covariates.-- Forecasting with Economic News.-- Panel Data Quantile Regression for Treatment Effect Models.-- Testing for Unobserved Heterogeneity via k-means Clustering.-- Structural Breaks in Grouped Heterogeneity.-- Combining p-values for Multivariate Predictive Ability Testing.-- Estimation of Panel Data Models with Random Interactive Effects and Multiple Structural Breaks when T is Fixed.-- Culling the Herd of Moments with Penalized Empirical Likelihood.-- Network Gradient Descent Algorithm for Decentralized Federated Learning.-- Identification and Estimation of Structural VARMA Models Using Higher Order Dynamics.-- Singular Conditional Autoregressive Wishart Model for Realized Covariance Matrices.-- Detection of Multiple Structural Breaks in Large Covariance Matrices.-- A Robust Approach to Heteroscedasticity, Error Serial Correlation and Slope Heterogeneity in Linear Models with Interactive Effects for Large Panel Data.-- Tail Risk Inference via Expectiles in Heavy-Tailed Time Series.-- Large Hybrid Time-Varying Parameter VARs.-- Empirical Likelihood and Uniform Convergence Rates for Dyadic Kernel Density Estimation.-- Covariate-Assisted Community Detection in Multi-Layer Networks.-- Inference in a Class of Optimization Problems: Confidence Regions and Finite Sample Bounds on Errors in Coverage Probabilities.-- Estimation of Leverage Effect: Kernel Function and Efficiency.-- Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates.-- News-Driven Uncertainty Fluctuations.-- Large-Scale Generalized Linear Models for Longitudinal Data with Grouped Patterns of Unobserved Heterogeneity.-- Can a Machine Correct Option Pricing Models?
Holdings
Item type: Serials
Current library: NU BALIWAG
Home library: NU BALIWAG
Collection: Serials
Shelving location: Serials
Call number: Journal of business & economic statistics. Volume 41, No. 3, July 2023
Copy number: c.1
Status: Not for loan
Barcode: NUBJ/M000130


Article : Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19. Abstract
Dealing with structural breaks is an essential step in most empirical economic research. This is particularly true in panel data comprising many cross-sectional units, which are all affected by major events. The COVID-19 pandemic has affected most sectors of the global economy; however, its impact on stock markets is still unclear. Most markets seem to have recovered while the pandemic is ongoing, suggesting that the relationship between stock returns and COVID-19 has been subject to a structural break. It is therefore important to know if a structural break has occurred and, if it has, to infer the date of the break. Motivated by this last observation, the present article develops a new break detection toolbox that is applicable to panels of different sizes, easy to implement, and robust to general forms of unobserved heterogeneity. The toolbox, which is the first of its kind, includes a structural change test, a break date estimator, and a break date confidence interval. Application to a panel covering 61 countries from January 3 to September 25, 2020, leads to the detection of a structural break that is dated to the first week of April. The effect of COVID-19 is negative before the break and zero thereafter, implying that while markets did react, the reaction was short-lived. A possible explanation is the quantitative easing programs announced by central banks all over the world in the second half of March.
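
As a hedged illustration of the generic idea behind least-squares break dating (not the authors' toolbox), the sketch below estimates a single common break date in a simulated pooled panel by minimizing the pooled sum of squared residuals over candidate dates; the variable names and data-generating process are made up for the example.

```python
# A minimal sketch (not the authors' toolbox): estimate a single common break
# date in a pooled panel y_it = beta(t) * x_it + e_it, where the slope changes
# at an unknown date, by minimizing the pooled SSR over candidate dates k.
import numpy as np

rng = np.random.default_rng(0)
N, T, true_k = 61, 38, 14                      # units, periods, true break date
x = rng.normal(size=(N, T))
beta_pre, beta_post = -0.8, 0.0                # negative effect, then zero
y = np.where(np.arange(T) < true_k, beta_pre, beta_post) * x \
    + rng.normal(scale=0.5, size=(N, T))

def pooled_ssr(k):
    """Pooled OLS SSR with separate slopes before and after candidate date k."""
    ssr = 0.0
    for sl in (np.s_[:, :k], np.s_[:, k:]):
        xs, ys = x[sl].ravel(), y[sl].ravel()
        b = xs @ ys / (xs @ xs)                # univariate OLS slope
        ssr += np.sum((ys - b * xs) ** 2)
    return ssr

candidates = range(4, T - 4)                   # trim the sample ends
k_hat = min(candidates, key=pooled_ssr)
print("estimated break date:", k_hat)          # close to true_k
```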

Article : Using Survey Information for Improving the Density Nowcasting of U.S. GDP. Abstract
We provide a methodology that efficiently combines the statistical models of nowcasting with survey information to improve the (density) nowcasting of U.S. real GDP. Specifically, we use the conventional dynamic factor model together with stochastic volatility components as the baseline statistical model. We augment the model with information from survey expectations by aligning the first and second moments of the predictive distribution implied by this baseline model with those extracted from the survey information at various horizons. Results indicate that survey information adds valuable information beyond the baseline model for nowcasting GDP. While the mean survey predictions deliver valuable information during extreme events such as the Covid-19 pandemic, the variation in the survey participants’ predictions, often used as a measure of “ambiguity,” conveys crucial information beyond the mean of those predictions for capturing the tail behavior of the GDP distribution.

Article : Bootstrapping Two-Stage Quasi-Maximum Likelihood Estimators of Time Series Models. Abstract
This article provides results on the validity of bootstrap inference methods for two-stage quasi-maximum likelihood estimation involving time series data, such as those used for multivariate volatility models or copula-based models. Existing approaches require the researcher to compute and combine many first- and second-order derivatives, which can be difficult and error-prone. Bootstrap methods are simpler to apply, allowing the substitution of capital (CPU cycles) for labor (keeping track of derivatives). We show the consistency of the bootstrap distribution and the consistency of bootstrap variance estimators, thereby justifying the use of bootstrap percentile intervals and bootstrap standard errors.
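
To make the bootstrap idea concrete, here is a minimal sketch (not the paper's two-stage estimator) of moving-block bootstrap standard errors and percentile intervals for a simple time-series estimator, an AR(1) slope fitted by least squares; the block length and simulated model are illustrative choices.

```python
# A hedged sketch of the general idea: moving-block bootstrap standard errors
# and percentile intervals for a time-series estimator, avoiding analytic
# derivative calculations entirely.
import numpy as np

rng = np.random.default_rng(1)
T, phi = 500, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

def ar1_slope(z):
    """Least-squares AR(1) slope estimate."""
    return z[:-1] @ z[1:] / (z[:-1] @ z[:-1])

def block_bootstrap(z, block_len=20, n_boot=999):
    """Resample overlapping blocks to preserve short-range dependence."""
    n = len(z)
    starts = np.arange(n - block_len + 1)
    reps = []
    for _ in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, n // block_len)])
        reps.append(ar1_slope(z[idx]))
    return np.array(reps)

reps = block_bootstrap(y)
print("estimate:", ar1_slope(y))
print("bootstrap SE:", reps.std(ddof=1))
print("95% percentile interval:", np.percentile(reps, [2.5, 97.5]))
```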

Article : Identification and Estimation of Multinomial Choice Models with Latent Special Covariates. Abstract
Identification of multinomial choice models is often established by using special covariates that have full support. This article shows how these identification results can be extended to a large class of multinomial choice models when all covariates are bounded. I also provide a new √n-consistent, asymptotically normal estimator of the finite-dimensional parameters of the model.

Article : Forecasting with Economic News. Abstract
The goal of this article is to evaluate the informational content of sentiment extracted from news articles about the state of the economy. We propose a fine-grained aspect-based sentiment analysis that has two main characteristics: (a) we consider only the text in the article that is semantically dependent on a term of interest (aspect-based), and (b) we assign a sentiment score to each word based on a dictionary that we develop for applications in economics and finance (fine-grained). Our dataset includes six large U.S. newspapers, for a total of over 6.6 million articles and 4.2 billion words. Our findings suggest that several measures of economic sentiment closely track business cycle fluctuations and that they are relevant predictors for four major macroeconomic variables. We find significant improvements in forecasting when sentiment is considered along with macroeconomic factors. In addition, we find that sentiment helps explain the tails of the probability distribution across several macroeconomic variables.
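
A toy sketch of the aspect-based, dictionary-scored approach described above: keep only the sentences that mention the aspect term, then average dictionary scores over their words. The miniature lexicon and example text are hypothetical, not the authors' dictionary.

```python
# Toy fine-grained, aspect-based sentiment scoring. The tiny LEXICON below is
# hypothetical; the sentence-level filter is a crude stand-in for the paper's
# semantic-dependency parsing.
import re

LEXICON = {"growth": 1.0, "strong": 1.0, "recovery": 0.8,
           "weak": -1.0, "recession": -1.0, "decline": -0.8}

def aspect_sentiment(text, aspect="economy"):
    sentences = re.split(r"(?<=[.!?])\s+", text.lower())
    scores = []
    for s in sentences:
        if aspect in s:                          # keep aspect-relevant text only
            words = re.findall(r"[a-z]+", s)
            scores += [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

article = ("The economy showed strong growth last quarter. "
           "Analysts still fear a decline in housing. "
           "Weather was mild in the northeast.")
print(aspect_sentiment(article))                 # > 0: net positive on the economy
```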

Article : Panel Data Quantile Regression for Treatment Effect Models. Abstract
In this study, we develop a novel estimation method for quantile treatment effects (QTE) under rank invariance and rank stationarity assumptions. Ishihara (2020) explores identification of the nonseparable panel data model under these assumptions and proposes a parametric estimation based on the minimum distance method. However, when the dimensionality of the covariates is large, minimum distance estimation using this process is computationally demanding. To overcome this problem, we propose a two-step estimation method based on the quantile regression and minimum distance methods. We then show the uniform asymptotic properties of our estimator and the validity of the nonparametric bootstrap. The Monte Carlo studies indicate that our estimator performs well in finite samples. Finally, we present two empirical illustrations, estimating the distributional effects of insurance provision on household production and of TV watching on child cognitive development.
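
For readers unfamiliar with the first step, a minimal sketch of quantile regression via direct minimization of the check (pinball) loss follows; the paper's full two-step minimum-distance procedure is not reproduced, and the simulated data are illustrative.

```python
# A minimal quantile regression sketch: minimize the pinball (check) loss
# directly. This illustrates only the first step of the paper's procedure.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + (1 + 0.5 * np.abs(x)) * rng.normal(size=n)

def pinball(theta, tau):
    """Average check loss for intercept/slope parameters theta at quantile tau."""
    u = y - theta[0] - theta[1] * x
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

tau = 0.75
res = minimize(pinball, x0=np.zeros(2), args=(tau,), method="Nelder-Mead")
print(f"tau={tau} intercept, slope:", res.x)
```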

Article : Testing for Unobserved Heterogeneity via k-means Clustering. Abstract
Clustering methods such as k-means have found widespread use in a variety of applications. This article proposes a split-sample testing procedure to determine whether a null hypothesis of a single cluster, indicating homogeneity of the data, can be rejected in favor of multiple clusters. The test is simple to implement, valid under mild conditions (including nonnormality and heterogeneity of the data in aspects beyond those in the clustering analysis), and applicable in a range of contexts (including clustering when the time series dimension is small, or clustering on parameters other than the mean). We verify that the test has good size control in finite samples, and we illustrate the test in applications to clustering vehicle manufacturers and U.S. mutual funds.
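
Below is a toy split-sample diagnostic in the spirit of the paper, though not its actual statistic or validity conditions: fit 2-means on one half of the data, measure how much two centers improve holdout fit over a single center, and calibrate that gain by simulating a single-cluster Gaussian null.

```python
# Toy split-sample homogeneity diagnostic (illustrative only; the paper's test
# statistic and its validity conditions differ).
import numpy as np

rng = np.random.default_rng(3)

def two_means(z, iters=50):
    """Plain Lloyd's algorithm for k=2 on univariate data."""
    c = z[rng.choice(len(z), 2, replace=False)]
    for _ in range(iters):
        lab = np.argmin(np.abs(z[:, None] - c[None, :]), axis=1)
        c = np.array([z[lab == k].mean() if np.any(lab == k) else c[k]
                      for k in (0, 1)])
    return c

def gain(train, test):
    """Holdout improvement of two fitted centers over a single center."""
    c = two_means(train)
    one = np.mean((test - train.mean()) ** 2)
    two = np.mean(np.min((test[:, None] - c[None, :]) ** 2, axis=1))
    return one - two

x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])  # 2 clusters
half = len(x) // 2
obs = gain(x[:half], x[half:])

null = []
for _ in range(499):                              # simulate single-cluster null
    z = rng.normal(x.mean(), x.std(), len(x))
    null.append(gain(z[:half], z[half:]))
pval = (1 + np.sum(np.array(null) >= obs)) / 500
print("p-value for single-cluster null:", pval)   # small: reject homogeneity
```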

Article : Structural Breaks in Grouped Heterogeneity. Abstract
Generating accurate forecasts in the presence of structural breaks requires careful management of bias-variance tradeoffs. Forecasting panel data under breaks offers the possibility to reduce parameter estimation error without inducing any bias if there exists a regime-specific pattern of grouped heterogeneity. To this end, we develop a new Bayesian methodology to estimate and formally test panel regression models in the presence of multiple breaks and unobserved regime-specific grouped heterogeneity. In an empirical application to forecasting inflation rates across 20 U.S. industries, our method generates significantly more accurate forecasts relative to a range of popular methods.

Article : Combining p-values for Multivariate Predictive Ability Testing. Abstract
In this article, we propose an intersection-union test for multivariate forecast accuracy based on the combination of a sequence of univariate tests. The testing framework evaluates a global null hypothesis of equal predictive ability using any number of univariate forecast accuracy tests under arbitrary dependence structures, without specifying the underlying multivariate distribution. An extensive Monte Carlo simulation exercise shows that our proposed test has very good size and power properties under several relevant scenarios, and performs well in both low- and high-dimensional settings. We illustrate the empirical validity of our testing procedure using a large dataset of 84 daily exchange rates running from January 1, 2011 to April 1, 2021. We show that our proposed test addresses inconclusive results that often arise in practice.
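
As a hedged sketch of one standard intersection-union combination rule (the paper's exact construction may differ): compute a univariate equal-predictive-ability p-value per series and combine them with the max rule, which controls the level under arbitrary dependence. HAC corrections are omitted for brevity, and the data are simulated.

```python
# Sketch of an intersection-union style p-value combination: per-series
# DM-type p-values combined with the max rule (level-valid under arbitrary
# dependence). Simulated forecast errors; HAC variances omitted for brevity.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
T, M = 500, 10                                  # periods, series
e1 = rng.normal(size=(T, M))                    # forecast errors, model 1
e2 = rng.normal(size=(T, M)) * 1.15             # model 2 slightly worse

d = e1**2 - e2**2                               # loss differentials per series
tstats = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(T))
pvals = 2 * norm.sf(np.abs(tstats))             # per-series p-values

p_global = pvals.max()                          # max-rule (IUT) combination
print("global p-value:", p_global)
```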

Article : Estimation of Panel Data Models with Random Interactive Effects and Multiple Structural Breaks when T is Fixed. Abstract
In this article, we propose a new estimator of panel data models with random interactive effects and multiple structural breaks that is suitable when the number of time periods, T, is fixed and only the number of cross-sectional units, N, is large. This is done by viewing the determination of the breaks as a shrinkage problem and estimating the regression coefficients, the number of breaks, and the break locations by applying a version of the Lasso approach. We show that with probability approaching one the approach can correctly determine the number of breaks and the dates of these breaks, and that the estimator of the regime-specific regression coefficients is consistent and asymptotically normal. We also provide Monte Carlo results suggesting that the approach performs very well in small samples, and empirical results suggesting that while the coefficients of the controls are breaking, the coefficients of the main deterrence regressors in a model of crime are not.

Article : Culling the Herd of Moments with Penalized Empirical Likelihood. Abstract
Models defined by moment conditions are at the center of structural econometric estimation, but economic theory is mostly agnostic about moment selection. While a large pool of valid moments can potentially improve estimation efficiency, a few invalid ones may undermine consistency. This article investigates the empirical likelihood estimation of these moment-defined models in high-dimensional settings. We propose a penalized empirical likelihood (PEL) estimation and establish its oracle property with consistent detection of invalid moments. The PEL estimator is asymptotically normally distributed, and a projected PEL procedure further eliminates its asymptotic bias and provides a more accurate normal approximation to the finite sample behavior. Simulation exercises demonstrate excellent numerical performance of these methods in estimation and inference.

Article : Network Gradient Descent Algorithm for Decentralized Federated Learning. Abstract
We study a fully decentralized federated learning algorithm, which is a novel gradient descent algorithm executed on a communication-based network. For convenience, we refer to it as a network gradient descent (NGD) method. In the NGD method, only statistics (e.g., parameter estimates) need to be communicated, minimizing the risk to privacy. Meanwhile, different clients communicate with each other directly according to a carefully designed network structure without a central master. This greatly enhances the reliability of the entire algorithm. Those nice properties inspire us to carefully study the NGD method both theoretically and numerically. Theoretically, we start with a classical linear regression model. We find that both the learning rate and the network structure play significant roles in determining the NGD estimator’s statistical efficiency. The resulting NGD estimator can be statistically as efficient as the global estimator, if the learning rate is sufficiently small and the network structure is weakly balanced, even if the data are distributed heterogeneously. Those interesting findings are then extended to general models and loss functions. Extensive numerical studies are presented to corroborate our theoretical findings. Classical deep learning models are also presented for illustration purposes.
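
A minimal sketch of the decentralized idea for linear regression, assuming a fixed ring network with doubly stochastic weights (a stylized decentralized gradient descent, not necessarily the authors' exact update): each client mixes its neighbors' parameter estimates and takes a local gradient step, so only statistics travel over the network.

```python
# Stylized network gradient descent for linear regression on a ring network.
# Only parameter estimates are exchanged; raw data never leave a client.
import numpy as np

rng = np.random.default_rng(5)
K, n, p = 8, 200, 3                               # clients, obs per client, features
beta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(K, n, p))
y = np.einsum('knp,p->kn', X, beta) + rng.normal(size=(K, n))

# ring network: each client talks to itself and its two neighbors
W = np.zeros((K, K))
for k in range(K):
    W[k, [k, (k - 1) % K, (k + 1) % K]] = 1 / 3   # doubly stochastic weights

theta = np.zeros((K, p))
lr = 0.05                                         # small learning rate
for _ in range(300):
    mixed = W @ theta                             # communicate statistics only
    resid = np.einsum('knp,kp->kn', X, theta) - y
    grads = np.einsum('knp,kn->kp', X, resid) / n # local least-squares gradients
    theta = mixed - lr * grads

print("true beta:", beta)
print("client estimates (rows):\n", np.round(theta, 3))
```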

Article : Identification and Estimation of Structural VARMA Models Using Higher Order Dynamics. Abstract
We use information from higher order moments to achieve identification of non-Gaussian structural vector autoregressive moving average (SVARMA) models, possibly nonfundamental or noncausal, through a frequency domain criterion based on higher order spectral densities. This allows us to identify the location of the roots of the determinantal lag matrix polynomials and to identify the rotation of the model errors leading to the structural shocks up to sign and permutation. We describe sufficient conditions for global and local parameter identification that rely on simple rank assumptions on the linear dynamics and on finite order serial and component independence conditions for the non-Gaussian structural innovations. We generalize previous univariate analysis to develop asymptotically normal and efficient estimates exploiting second and higher order cumulant dynamics given a particular structural shocks ordering without assumptions on causality or invertibility. Finite sample properties of estimates are explored with real and simulated data.

Article : Singular Conditional Autoregressive Wishart Model for Realized Covariance Matrices. Abstract
Realized covariance matrices are often constructed under the assumption that the richness of intra-day return data is greater than the portfolio size, resulting in nonsingular matrix measures. However, when, for example, the portfolio size is large, assets suffer from illiquidity issues, or market microstructure noise deters sampling on very high frequencies, this relation is not guaranteed. Under these common conditions, realized covariance matrices may be singular by construction. Motivated by this situation, we introduce the Singular Conditional Autoregressive Wishart (SCAW) model to capture the temporal dynamics of time series of singular realized covariance matrices, extending the rich literature on econometric Wishart time series models to the singular case. The model is further developed with covariance targeting adapted to matrices and a sector-wise BEKK specification, allowing excellent scalability to large and extremely large portfolio sizes. Finally, the model is estimated on a 20-year time series containing 50 stocks and on a 10-year time series containing 300 stocks, and evaluated using out-of-sample forecast accuracy. It outperforms the benchmark models with high statistical significance, and the parsimonious specifications perform better than the baseline SCAW model while using considerably fewer parameters.

Article : Detection of Multiple Structural Breaks in Large Covariance Matrices. ABSTRACT
This article studies multiple structural breaks in large contemporaneous covariance matrices of high-dimensional time series satisfying an approximate factor model. The breaks in the second-order moment structure of the common components are due to sudden changes in either factor loadings or the covariance of latent factors, requiring appropriate transformation of the factor models to facilitate estimation of the (transformed) common factors and factor loadings via classical principal component analysis. With the estimated factors and idiosyncratic errors, an easy-to-implement CUSUM-based detection technique is introduced to consistently estimate the location and number of breaks and correctly identify whether they originate in the common or idiosyncratic error components. The algorithms of Wild Binary Segmentation for Covariance (WBS-Cov) and Wild Sparsified Binary Segmentation for Covariance (WSBS-Cov) are used to estimate breaks in the common and idiosyncratic error components, respectively. Under some technical conditions, the asymptotic properties of the proposed methodology are derived, with near-optimal rates (up to a logarithmic factor) achieved for the estimated breaks. Monte Carlo simulation studies are conducted to examine the finite-sample performance of the developed method and compare it with other existing approaches. We finally apply our method to study the contemporaneous covariance structure of daily returns of S&P 500 constituents and identify a few breaks, including those occurring during the 2007–2008 financial crisis and the recent coronavirus (COVID-19) outbreak. An R package “BSCOV” is provided to implement the proposed algorithms.
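
To illustrate the CUSUM building block in its simplest form (a single variance series rather than the paper's full covariance matrices, and without the wild binary segmentation layer), consider the following sketch on simulated data.

```python
# Stripped-down CUSUM-type break detection for a variance break in one series.
# The paper's WBS-Cov/WSBS-Cov algorithms operate on full covariance matrices.
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])

def cusum_variance(z):
    """Location and size of the largest CUSUM deviation in second moments."""
    n = len(z)
    s2 = z ** 2
    total = s2.sum()
    stats = []
    for k in range(30, n - 30):                   # trimmed candidate dates
        stat = abs(s2[:k].sum() - k / n * total) / np.sqrt(n)
        stats.append((stat, k))
    return max(stats)

stat, k_hat = cusum_variance(x)
print("CUSUM statistic:", round(stat, 2), "estimated break at:", k_hat)
```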

Article : A Robust Approach to Heteroscedasticity, Error Serial Correlation and Slope Heterogeneity in Linear Models with Interactive Effects for Large Panel Data. Abstract
In this article, we propose a robust approach against heteroscedasticity, error serial correlation and slope heterogeneity in linear models with interactive effects for large panel data. First, consistency and asymptotic normality of the pooled iterated principal component (IPC) estimator for random coefficient and homogeneous slope models are established. Then, we prove the asymptotic validity of the associated Wald test for slope parameter restrictions based on the panel heteroscedasticity and autocorrelation consistent (PHAC) variance matrix estimator for both random coefficient and homogeneous slope models, which does not require Newey-West type time-series parameter truncation. These results asymptotically justify the use of the same pooled IPC estimator and PHAC standard error for both homogeneous-slope and heterogeneous-slope models. This robust approach can significantly reduce model selection uncertainty for applied researchers. In addition, we propose a Lagrange Multiplier (LM) test for random coefficients correlated with covariates. This test has nontrivial power against correlated random coefficients, but not against random coefficients or homogeneous slopes. The LM test is important because the IPC estimator becomes inconsistent under correlated random coefficients. The finite sample evidence and an empirical application support the reliability and usefulness of our robust approach.

Article : Tail Risk Inference via Expectiles in Heavy-Tailed Time Series. Abstract
Expectiles define the only law-invariant, coherent and elicitable risk measure apart from the expectation. The popularity of expectile-based risk measures is steadily growing and their properties have been studied for independent data, but further results are needed to establish that extreme expectiles can be applied with the kind of dependent time series models relevant to finance. In this article we provide a basis for inference on extreme expectiles and expectile-based marginal expected shortfall in a general β-mixing context that encompasses ARMA and GARCH models with heavy-tailed innovations. Our methods allow the estimation of marginal (pertaining to the stationary distribution) and dynamic (conditional on the past) extreme expectile-based risk measures. Simulations and applications to financial returns show that the new estimators and confidence intervals greatly improve on existing ones when the data are dependent.
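
As a small illustration of the basic object involved (the marginal sample expectile; the paper's extreme-expectile extrapolation and β-mixing inference theory go well beyond this), the sketch below computes expectiles by iteratively reweighted least squares on simulated heavy-tailed returns.

```python
# Sample expectile by iteratively reweighted least squares: the tau-expectile
# solves an asymmetrically weighted mean equation; tau = 0.5 gives the mean.
import numpy as np

rng = np.random.default_rng(7)
y = rng.standard_t(df=4, size=5000)               # heavy-tailed "returns"

def expectile(z, tau, iters=100):
    theta = z.mean()
    for _ in range(iters):
        w = np.where(z >= theta, tau, 1 - tau)    # asymmetric weights
        theta = np.sum(w * z) / np.sum(w)
    return theta

print("0.99-expectile:", expectile(y, 0.99))
print("0.50-expectile (mean):", expectile(y, 0.5), "vs", y.mean())
```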

Article : Large Hybrid Time-Varying Parameter VARs. Abstract
Time-varying parameter VARs with stochastic volatility are routinely used for structural analysis and forecasting in settings involving a few endogenous variables. Applying these models to high-dimensional datasets has proved to be challenging due to intensive computations and over-parameterization concerns. We develop an efficient Bayesian sparsification method for a class of models we call hybrid TVP-VARs—VARs with time-varying parameters in some equations but constant coefficients in others. Specifically, for each equation, the new method automatically decides whether the VAR coefficients and contemporaneous relations among variables are constant or time-varying. Using U.S. datasets of various dimensions, we find evidence that the parameters in some, but not all, equations are time varying. The large hybrid TVP-VAR also forecasts better than many standard benchmarks.

Article : Empirical Likelihood and Uniform Convergence Rates for Dyadic Kernel Density Estimation. ABSTRACT
This article studies the asymptotic properties of, and alternative inference methods for, kernel density estimation (KDE) for dyadic data. We first establish uniform convergence rates for dyadic KDE. Second, we propose a modified jackknife empirical likelihood procedure for inference. The proposed test statistic is asymptotically pivotal regardless of the presence of dyadic clustering. The results are further extended to cover the practically relevant case of incomplete dyadic data. Simulations show that this modified jackknife empirical likelihood-based inference procedure delivers precise coverage probabilities even with modest sample sizes and with incomplete dyadic data. Finally, we illustrate the method by studying airport congestion in the United States.
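
The point estimator itself is straightforward: average a kernel over all ordered pairs i ≠ j. A minimal sketch on simulated dyadic data follows (the jackknife empirical likelihood inference step is not shown, and the data-generating process is made up).

```python
# Dyadic kernel density estimator: average Gaussian kernels over the n(n-1)
# off-diagonal dyadic observations W_ij.
import numpy as np

rng = np.random.default_rng(8)
n = 60                                            # number of nodes
a = rng.normal(size=n)                            # node effects
W = a[:, None] + a[None, :] + rng.normal(size=(n, n))  # dyadic outcomes
mask = ~np.eye(n, dtype=bool)
w = W[mask]                                       # the n(n-1) dyadic observations

def dyadic_kde(grid, obs, h):
    u = (grid[:, None] - obs[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return K.mean(axis=1) / h

grid = np.linspace(-6, 6, 101)
print(dyadic_kde(grid, w, h=0.5)[:5])             # density estimate on the grid
```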

Article : Covariate-Assisted Community Detection in Multi-Layer Networks. ABSTRACT
Communities in multi-layer networks consist of nodes with similar connectivity patterns across all layers. This article proposes a tensor-based community detection method in multi-layer networks, which leverages available node-wise covariates to improve community detection accuracy. This is motivated by the network homophily principle, which suggests that nodes with similar covariates tend to reside in the same community. To take advantage of the node-wise covariates, the proposed method augments the multi-layer network with an additional layer constructed from the node similarity matrix with proper scaling, and conducts a Tucker decomposition of the augmented multi-layer network, yielding the spectral embedding vector of each node for community detection. Asymptotic consistencies of the proposed method in terms of community detection are established, which are also supported by numerical experiments on various synthetic networks and two real-life multi-layer networks.

Article : Inference in a Class of Optimization Problems: Confidence Regions and Finite Sample Bounds on Errors in Coverage Probabilities. Abstract
This article describes three methods for carrying out nonasymptotic inference on partially identified parameters that are solutions to a class of optimization problems. Applications in which the optimization problems arise include estimation under shape restrictions, estimation of models of discrete games, and estimation based on grouped data. The partially identified parameters are characterized by restrictions that involve the unknown population means of observed random variables in addition to structural parameters. Inference consists of finding confidence intervals for functions of the structural parameters. Our theory provides finite-sample lower bounds on the coverage probabilities of the confidence intervals under three sets of assumptions of increasing strength. With the moderate sample sizes found in most economics applications, the bounds become tighter as the assumptions strengthen. We discuss estimation of population parameters that the bounds depend on and contrast our methods with alternative methods for obtaining confidence intervals for partially identified parameters. The results of Monte Carlo experiments and empirical examples illustrate the usefulness of our method.

Article : Estimation of Leverage Effect: Kernel Function and Efficiency. Abstract
This article proposes more efficient estimators for the leverage effect than the existing ones. The idea is to allow for nonuniform kernel functions in the spot volatility estimates or the aggregated returns. This finding highlights a critical difference between the leverage effect and integrated volatility functionals, where the uniform kernel is optimal. Another distinction between these two cases is that the overlapping estimators of the leverage effect are more efficient than the nonoverlapping ones. We offer two perspectives to explain these differences: one is based on the “effective kernel” and the other on the correlation structure of the nonoverlapping estimators. The simulation study shows that the proposed estimator with a nonuniform kernel substantially increases the estimation efficiency and testing power relative to the existing ones.

Article : Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates. Abstract
In this article, we propose a robust signal recovery method for high-dimensional linear log-contrast models, when the error distribution could be heavy-tailed and asymmetric. The proposed method is built on the Huber loss with ℓ1 penalization. We establish the ℓ1 and ℓ2 consistency of the resulting estimator. Under conditions analogous to the irrepresentability condition and the minimum signal strength condition, we prove that the signed support of the slope parameter vector can be recovered with high probability. The finite-sample behavior of the proposed method is evaluated through simulation studies, and applications to a GDP satisfaction dataset and an HIV microbiome dataset are provided.
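
A rough proximal-gradient sketch of the estimation problem described above: gradient steps on the Huber loss, soft-thresholding for the ℓ1 penalty, and a projection enforcing the zero-sum constraint of log-contrast models. The tuning constants are illustrative, and a production solver would handle the constraint more carefully (e.g., via ADMM), so treat this as a sketch under those assumptions.

```python
# Sketch of Huber + l1 estimation for a log-contrast model with a zero-sum
# slope constraint; illustrative tuning, not a production solver.
import numpy as np

rng = np.random.default_rng(9)
n, p = 200, 30
Z = np.log(rng.dirichlet(np.ones(p), size=n))     # log-composition covariates
Z -= Z.mean(axis=1, keepdims=True)                # harmless under zero-sum slopes
beta = np.zeros(p); beta[:4] = [2.0, -2.0, 1.5, -1.5]   # sparse, sums to zero
y = Z @ beta + rng.standard_t(df=2, size=n)       # heavy-tailed errors

def psi(r, delta=1.345):
    """Derivative of the Huber loss (clipped residual)."""
    return np.clip(r, -delta, delta)

b = np.zeros(p)
lam = 0.02                                        # illustrative penalty level
step = n / np.linalg.norm(Z, 2) ** 2              # 1 / Lipschitz bound
for _ in range(3000):
    g = -Z.T @ psi(y - Z @ b) / n                 # Huber gradient step
    b = b - step * g
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # l1 prox
    b -= b.mean()                                 # zero-sum projection
print("selected coefficients:", np.where(np.abs(b) > 0.1)[0])  # ideally [0 1 2 3]
```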

Article : News-Driven Uncertainty Fluctuations. Abstract
We investigate the channels through which news influences the subjective beliefs of economic agents, with a particular focus on their subjective uncertainty. The main insight of the article is that news that is more at odds with agents’ prior beliefs generates an increase in uncertainty; news that is more consistent with their prior beliefs generates a decrease in uncertainty. We illustrate this insight theoretically and then estimate the model empirically using data on U.S. output and professional forecasts to provide novel measures of news shocks and uncertainty. We then estimate impulse responses from the identified shocks to show that news shocks can affect macroeconomic variables in ways that resemble the effects of uncertainty shocks. Our results suggest that controlling for news can potentially diminish the estimated effects of uncertainty shocks on real variables, particularly at longer horizons.

Article : Large-Scale Generalized Linear Models for Longitudinal Data with Grouped Patterns of Unobserved Heterogeneity. ABSTRACT
This article provides methods for flexibly capturing unobservable heterogeneity from longitudinal data in the context of an exponential family of distributions. The group memberships of individual units are left unspecified, and their heterogeneity is influenced by group-specific unobservable factor structures. The model includes, as special cases, probit, logit, and Poisson regressions with interactive fixed effects along with unknown group membership. We discuss a computationally efficient estimation method and derive the corresponding asymptotic theory. Uniform consistency of the estimated group membership is established. To test heterogeneous regression coefficients within groups, we propose a Swamy-type test that allows for unobserved heterogeneity. We apply the proposed method to the study of the market structure of the taxi industry in New York City. Our method unveils interesting and important insights from large-scale longitudinal data that consist of over 450 million data points.

Article : Can a Machine Correct Option Pricing Models? Abstract
We introduce a novel two-step approach to predict implied volatility surfaces. Given any fitted parametric option pricing model, we train a feedforward neural network on the model-implied pricing errors to correct for mispricing and boost performance. Using a large dataset of S&P 500 options, we test our nonparametric correction on several parametric models ranging from ad-hoc Black–Scholes to structural stochastic volatility models and demonstrate the boosted performance for each model. Out-of-sample prediction exercises in the cross-section and in the option panel show that machine-corrected models always outperform their respective original ones, often by a wide margin. Our method is relatively indiscriminate, bringing pricing errors down to a similar magnitude regardless of the misspecification of the original parametric model. Even so, correcting models that are less misspecified usually leads to additional improvements in performance and also outperforms a neural network fitted directly to the implied volatility surface.
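
A schematic of the two-step correction on toy data (not S&P 500 options; the parametric model here is a hypothetical quadratic smile rather than the structural models used in the paper): fit the parametric surface, train a small neural network on its errors, and add the learned correction back.

```python
# Two-step "machine correction" schematic on a simulated implied-volatility
# surface: parametric fit, then a neural network trained on the residuals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
n = 4000
m = rng.uniform(-0.3, 0.3, n)                 # log-moneyness
tau = rng.uniform(0.05, 1.0, n)               # maturity in years
iv = 0.2 + 0.3 * m**2 - 0.05 * m + 0.02 * np.sqrt(tau) \
     + 0.01 * np.sin(8 * m) + rng.normal(0, 0.005, n)   # "true" surface

X = np.column_stack([m, tau])
# step 1: parametric fit (an ad-hoc quadratic smile, fitted by OLS)
F = np.column_stack([np.ones(n), m, m**2, np.sqrt(tau)])
coef, *_ = np.linalg.lstsq(F, iv, rcond=None)
iv_param = F @ coef
# step 2: neural network trained on the parametric model's errors
train = rng.random(n) < 0.7
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[train], (iv - iv_param)[train])
iv_corrected = iv_param + net.predict(X)

for name, pred in [("parametric", iv_param), ("machine-corrected", iv_corrected)]:
    rmse = np.sqrt(np.mean((iv[~train] - pred[~train]) ** 2))
    print(name, "out-of-sample RMSE:", round(rmse, 5))
```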

