Journal of business & economic statistics.
ISSN: 0735-0015
| Item type | Current library | Home library | Collection | Shelving location | Call number | Copy number | Status | Date due | Barcode |
|---|---|---|---|---|---|---|---|---|---|
|  | NU BALIWAG | NU BALIWAG | Serials | Serials | Journal of business & economic statistics. Volume 42, No. 2, April 2024 | c.1 | Not for loan |  | NUBJ/M000254 |
Introduction to the special issue on statistics of dynamic networks.-- Modeling functional time series and mixed-type predictors with partially functional autoregressions.-- Dynamic peer groups of arbitrage characteristics.-- Monitoring network changes in social media.-- Dynamic network quantile regression model.-- Large spillover networks of nonstationary systems.-- A time-varying network for cryptocurrencies.-- Testing for global covariate effects in dynamic interaction event networks.-- Estimation of matrix exponential unbalanced panel data models with fixed effects: an application to US outward FDI stock.-- Backtesting systemic risk forecasts using multi-objective elicitability.-- Bonferroni type tests for return predictability and the initial condition.-- Jumps or staleness?-- Estimation and inference on time-varying FAVAR models.-- The leverage effect puzzle under semi-nonparametric stochastic volatility models.-- Simple inference on functionals of set-identified parameters defined by linear moments.-- Links and legibility: making sense of historical U.S. census automated linking methods.-- Estimating a continuous treatment model with spillovers: a control function approach.-- Neural networks for partially linear quantile regression.-- Tie-break bootstrap for nonparametric rank statistics.-- Bootstrap inference for panel data quantile regression.-- Adaptive testing for alphas in high-dimensional factor pricing models.-- Uniform nonparametric inference for spatially dependent panel data.-- A simple correction for misspecification in trend-cycle decompositions with an application to estimating r*.-- Asset pricing via the conditional quantile variational autoencoder.-- Consistent estimation of multiple breakpoints in dependence measures.-- Model-assisted complier average treatment effect estimates in randomized experiments with noncompliance.-- A general framework for constructing locally self-normalized multiple-change-point tests.-- Instrumental variable estimation of dynamic treatment effects on a duration outcome.-- An LM test for the conditional independence between regressors and factor loadings in panel data models with interactive effects.-- A design-based perspective on synthetic control methods.-- Dynamic autoregressive liquidity (DArLiQ).-- Generalized autoregressive positive-valued processes.-- Generalizing the results from social experiments: theory and evidence from India.-- Extreme changes in changes.-- Large order-invariant Bayesian VARs with stochastic volatility.
In many business and economics studies, researchers have sought to measure the dynamic dependence of curves with high-dimensional mixed-type predictors. We propose a partially functional autoregressive model (pFAR) where the serial dependence of curves is controlled by coefficient operators that are defined on a two-dimensional surface, and the individual and group effects of mixed-type predictors are estimated with a two-layer regularization. We develop an efficient estimation procedure with proven asymptotic properties of consistency and sparsity. We show how to choose the sieve and tuning parameters in regularization based on a forward-looking criterion. In addition to the asymptotic properties, numerical validation suggests that the dependence structure is accurately detected. The implementation of the pFAR within a real-world analysis of dependence in German daily natural gas flow curves, with seven lagged curves and 85 scalar predictors, produces superior forecast accuracy and an insightful understanding of the dynamics of natural gas supply and demand for the municipal, industry, and border nodes, respectively.
We propose an asset pricing factor model constructed with semiparametric characteristics-based mispricing and factor loading functions. We approximate the unknown functions by a B-spline sieve, where the number of B-spline coefficients is diverging. We estimate this model and test the existence of the mispricing function by a power-enhanced hypothesis test. The enhanced test solves the low power problem caused by diverging B-spline coefficients, with the strengthened power approaching one asymptotically. We also investigate the structure of mispricing components through hierarchical K-means clustering. We apply our methodology to CRSP (Center for Research in Security Prices) and Compustat data for the U.S. stock market with one-year rolling windows during 1967–2017. This empirical study shows the presence of mispricing functions in certain time blocks. We also find that distinct clusters of the same characteristics lead to similar arbitrage returns, forming a “peer group” of arbitrage characteristics.
Econometricians are increasingly working with high-dimensional networks and their dynamics. Econometricians, however, are often confronted with unforeseen changes in network dynamics. In this article, we develop a method and the corresponding algorithm for monitoring changes in dynamic networks. We characterize two types of changes, edge-initiated and node-initiated, to capture the complexity of networks. The proposed approach accounts for three potential challenges in the analysis of networks. First, networks are high-dimensional objects, causing standard statistical tools to suffer from the curse of dimensionality. Second, any potential changes in social networks are likely driven by a few nodes or edges in the network. Third, in many dynamic network applications, such as monitoring network connectedness or centrality, it is more practical to detect changes in an online fashion than offline. At each time point, the proposed detection method projects the entire network onto a low-dimensional vector, taking sparsity into account, and then sequentially detects a change by comparing consecutive estimates of the optimal projection direction. As long as the change is sizeable and persistent, the projected vectors will converge to the optimal one, leading to a jump in the sine angle distance between them; a change is therefore declared. Strong theoretical guarantees on both the false alarm rate and detection delays are derived in a sub-Gaussian setting, even under spatial and temporal dependence in the data stream. Numerical studies and an application to a social media message network support the effectiveness of our method.
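A minimal Python sketch of the projection-and-sine-distance monitoring idea described in the abstract. It is illustrative only: the running-average window, the soft-thresholding rule, and the alarm threshold are assumptions, not the authors' algorithm.

```python
# Sketch: monitor a dynamic network by tracking the leading (sparsified)
# eigenvector of a running-average adjacency matrix and declaring a change
# when the sine-angle distance between consecutive direction estimates jumps.
import numpy as np

def leading_direction(A, sparsity=0.1):
    """Leading eigenvector of a symmetric matrix, soft-thresholded to mimic sparsity."""
    vals, vecs = np.linalg.eigh(A)
    v = vecs[:, -1]
    v = np.sign(v) * np.maximum(np.abs(v) - sparsity * np.max(np.abs(v)), 0.0)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else vecs[:, -1]

def sine_distance(u, v):
    """Sine of the angle between two unit vectors (sign-invariant)."""
    c = min(1.0, abs(u @ v))
    return np.sqrt(1.0 - c**2)

def monitor(adjacencies, window=20, threshold=0.5):
    """Return time indices at which a change is declared."""
    alarms, prev = [], None
    for t in range(window, len(adjacencies)):
        A_bar = np.mean(adjacencies[t - window:t], axis=0)  # local average network
        direction = leading_direction(A_bar)
        if prev is not None and sine_distance(prev, direction) > threshold:
            alarms.append(t)
        prev = direction
    return alarms
```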
We propose a dynamic network quantile regression model to investigate quantile connectedness using predetermined network information. We extend the existing network quantile autoregression model of Zhu et al. by explicitly allowing for contemporaneous network effects and controlling for common factors across quantiles. To cope with the endogeneity issue due to simultaneous network spillovers, we adopt the instrumental variable quantile regression (IVQR) estimation and derive the consistency and asymptotic normality of the IVQR estimator using the near epoch dependence property of the network process. Via Monte Carlo simulations, we confirm the satisfactory performance of the IVQR estimator across different quantiles under different network structures. Finally, we demonstrate the usefulness of our proposed approach with an application to a dataset on stocks traded on the NYSE and NASDAQ in 2016.
This article proposes a vector error correction framework for constructing large consistent spillover networks of nonstationary systems, grounded in the network theory of Diebold and Yilmaz. We aim to provide a methodology tailored to large nonstationary (macro)economic and financial systems, avoiding the technical and often hard-to-verify assumptions of general statistical high-dimensional approaches in which the dimension can also increase with sample size. To achieve this, we propose an elementwise Lasso-type technique for consistent and numerically efficient model selection of the VECM, and relate the resulting forecast error variance decomposition to the network topology representation. We also derive the corresponding asymptotic results for model selection and network estimation under standard assumptions. Moreover, we develop a refinement strategy for efficient estimation and show implications and modifications for general dependent innovations. In a comprehensive simulation study, we show convincing finite sample performance of our technique in all cases of moderate and low dimensions. In an application to a system of FX rates, the proposed method leads to novel insights on the connectedness and spillover effects in the FX market among the OECD countries.
Cryptocurrencies' return cross-predictability and technological similarity yield information on risk propagation and market segmentation. To investigate these effects, we build a time-varying network for cryptocurrencies based on the evolution of return cross-predictability and technological similarities. We develop a dynamic covariate-assisted spectral clustering method to consistently estimate the latent community structure of the cryptocurrency network that accounts for both sets of information. We demonstrate that investors can achieve better risk diversification by investing in cryptocurrencies from different communities. A cross-sectional portfolio that implements an inter-crypto momentum trading strategy earns a 1.08% daily return. By dissecting the portfolio returns on behavioral factors, we confirm that our results are not driven by behavioral mechanisms.
In statistical network analysis it is common to observe so-called interaction data. Such data are characterized by actors forming the vertices and interacting along the edges of the network, where edges are randomly formed and dissolved over the observation horizon. In addition, covariates are observed, and the goal is to model the impact of the covariates on the interactions. We distinguish two types of covariates: global, system-wide covariates (i.e., covariates taking the same value for all individuals, such as seasonality) and local, dyadic covariates modeling interactions between two individuals in the network. Existing continuous-time network models are extended to allow for comparing a completely parametric model and a model that is parametric only in the local covariates but has a global nonparametric time component. This allows one, for instance, to test whether global time dynamics can be explained by simple global covariates like weather, seasonality, etc. The procedure is applied to a bike-sharing network using weather and weekdays as global covariates and distances between the bike stations as local covariates.
In this article, we consider a matrix exponential unbalanced panel data model that allows for (i) spillover effects using matrix exponential terms, (ii) unobserved heterogeneity across entities and time, and (iii) potential heteroscedasticity in the error terms across entities and time. We adopt a likelihood-based direct estimation approach in which we jointly estimate the common parameters and fixed effects. To ensure that our estimator has the standard large sample properties, we show how the score functions should be suitably adjusted under both homoscedasticity and heteroscedasticity. We define our suggested estimator as the root of the adjusted score functions, and therefore our approach can be called the M-estimation approach. For inference, we suggest an analytical bias correction approach involving the sample counterpart and plug-in methods to consistently estimate the variance-covariance matrix of the suggested M-estimator. Through an extensive Monte Carlo study, we show that the suggested M-estimator has good finite sample properties. In an empirical application, we use our model to investigate the third country effects on the U.S. outward foreign direct investment (FDI) stock at the industry level.
Systemic risk measures such as CoVaR, CoES, and MES are widely used in finance and macroeconomics and by regulatory bodies. Despite their importance, we show that they fail to be elicitable and identifiable. This renders forecast comparison and validation, commonly summarized as “backtesting,” impossible. The novel notion of multi-objective elicitability solves this problem by relying on bivariate scores equipped with the lexicographic order. Based on this concept, we propose Diebold–Mariano type tests with suitable bivariate scores to compare systemic risk forecasts. We illustrate the test decisions by an easy-to-apply traffic-light approach. Finally, we apply our traffic-light approach to DAX 30 and S&P 500 returns, and infer some recommendations for regulators.
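For orientation, a classical Diebold–Mariano comparison based on a single loss series can be sketched as below; the article's multi-objective backtests replace this with bivariate scores under the lexicographic order, which is not reproduced here.

```python
# Sketch of a classical two-sided Diebold-Mariano test of equal expected loss.
import numpy as np
from scipy import stats

def diebold_mariano(loss_a, loss_b, max_lag=0):
    """DM statistic and p-value; max_lag controls the truncated long-run variance."""
    d = np.asarray(loss_a) - np.asarray(loss_b)   # loss differential
    T = d.size
    gamma0 = np.var(d, ddof=0)
    lrv = gamma0 + 2 * sum(np.cov(d[k:], d[:-k], ddof=0)[0, 1]
                           for k in range(1, max_lag + 1))
    dm = d.mean() / np.sqrt(lrv / T)
    pval = 2 * (1 - stats.norm.cdf(abs(dm)))      # compare to N(0,1) quantiles
    return dm, pval
```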
We develop tests for predictability that are robust to both the magnitude of the initial condition and the degree of persistence of the predictor. While the popular Bonferroni Q test of Campbell and Yogo displays excellent power properties for strongly persistent predictors with an asymptotically negligible initial condition, it can suffer from severe size distortions and power losses when either the initial condition is asymptotically non-negligible or the predictor is weakly persistent. The Bonferroni t test of Cavanagh, Elliott, and Stock, although displaying power well below that of the Bonferroni Q test for strongly persistent predictors with an asymptotically negligible initial condition, displays superior size control and power when the initial condition is asymptotically non-negligible. In the case where the predictor is weakly persistent, a conventional regression t test compared to standard normal quantiles is known to be asymptotically optimal under Gaussianity. Based on these properties, we propose two asymptotically size-controlled hybrid tests that are functions of the Bonferroni Q, Bonferroni t, and conventional t tests. Our proposed hybrid tests exhibit very good power regardless of the magnitude of the initial condition or the degree of persistence of the predictor. An empirical application to the data originally analyzed by Campbell and Yogo shows that our new hybrid tests are much more likely to find evidence of predictability than the Bonferroni Q test when the initial condition of the predictor is estimated to be large in magnitude.
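The conventional predictive-regression t test mentioned in the abstract is straightforward to compute; a minimal sketch using statsmodels follows (the hybrid Bonferroni procedures are not reproduced).

```python
# Sketch: regress next-period returns on the lagged predictor and compare the
# slope t-statistic to standard normal quantiles.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def predictive_t_test(returns, predictor):
    """returns[t] regressed on predictor[t-1]; returns the slope t-stat and p-value."""
    y = np.asarray(returns)[1:]
    X = sm.add_constant(np.asarray(predictor)[:-1])
    fit = sm.OLS(y, X).fit()
    t_stat = fit.tvalues[1]
    p_value = 2 * (1 - norm.cdf(abs(t_stat)))
    return t_stat, p_value
```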
Even moderate amounts of zero returns in financial data, associated with stale prices, are heavily detrimental to reliable jump inference. We harness staleness-robust estimators to reappraise the statistical features of jumps in financial markets. We find that jumps are much less frequent, and contribute much less to price variation, than the empirical literature has found so far. In particular, the empirical finding that volatility is driven by a pure jump process is shown to be an artifact of staleness.
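As a simple descriptive companion to the abstract (not the staleness-robust estimators themselves), the prevalence of stale prices can be gauged by the daily share of exact-zero intraday returns.

```python
# Sketch: crude staleness proxy, the daily fraction of zero intraday log returns.
import numpy as np
import pandas as pd

def zero_return_share(prices: pd.Series) -> pd.Series:
    """`prices` is an intraday price series indexed by timestamps."""
    log_ret = np.log(prices).diff().dropna()
    return log_ret.groupby(log_ret.index.date).apply(lambda r: (r == 0).mean())
```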
We introduce a time-varying (TV) factor-augmented vector autoregressive (FAVAR) model to capture the TV behavior in the factor loadings and the VAR coefficients. To consistently estimate the TV parameters, we first obtain the unobserved common factors via the local principal component analysis (PCA) and then estimate the TV-FAVAR model via a local smoothing approach. The limiting distribution of the proposed estimators is established. To gauge possible sources of TV features in the FAVAR model, we propose three L2-distance-based test statistics and study their asymptotic properties under the null and local alternatives. Simulation studies demonstrate the excellent finite sample performance of the proposed estimators and tests. In an empirical application to the U.S. macroeconomic dataset, we document overwhelming evidence of structural changes in the FAVAR model and show that the TV-FAVAR model outperforms the conventional time-invariant FAVAR model in predicting certain key macroeconomic series.
This article extends the solution proposed by Aït-Sahalia, Fan, and Li for the leverage effect puzzle, which refers to the fact that the empirical correlation between daily asset returns and the changes in daily volatility estimated from high-frequency data is nearly zero. Complementing the analysis in Aït-Sahalia, Fan, and Li via the Heston model, we work with a generic semi-nonparametric stochastic volatility model via an operator-based expansion method. Under such a general setup, we identify a new source of bias due to the flexibility of the variance dynamics, distinguishing the leverage effect parameter from the instantaneous correlation parameter. For estimating the leverage effect parameter, we show that the main results on analyzing the various sources of bias, as well as the resulting statistical procedures for bias correction, in Aït-Sahalia, Fan, and Li hold true and are thus indeed theoretically robust. For estimating the instantaneous correlation parameter, we develop a new nonparametric estimation method.
This article proposes a new approach to obtain uniformly valid inference for linear functionals or scalar subvectors of a partially identified parameter defined by linear moment inequalities. The procedure amounts to bootstrapping the value functions of randomly perturbed linear programming problems, and does not require the researcher to grid over the parameter space. The low-level conditions for uniform validity rely on genericity results for linear programs. The unconventional perturbation approach produces a confidence set with a coverage probability of 1 over the identified set, but obtains exact coverage on an outer set, is valid under weak assumptions, and is computationally simple to implement.
How does handwriting legibility affect the performance of algorithms that link individuals across census rounds? We propose a measure of legibility, which we implement at scale for the 1940 U.S. Census, and find strikingly wide variation in enumeration-district-level legibility. Using boundary discontinuities in enumeration districts, we estimate the causal effect of low legibility on the quality of linked samples, measured by linkage rates and share of validated links. Our estimates imply that, across eight linking algorithms, perfect legibility would increase the linkage rate by 5–10 percentage points. Improvements in transcription could substantially increase the quality of linked samples.
We study a continuous treatment effect model in the presence of treatment spillovers through social networks. We assume that one’s outcome is affected not only by his/her own treatment but also by a (weighted) average of his/her neighbors’ treatments, both of which are treated as endogenous variables. Using a control function approach with appropriate instrumental variables, we show that the conditional mean potential outcome can be nonparametrically identified. We also consider a more empirically tractable semiparametric model and develop a three-step estimation procedure for this model. As an empirical illustration, we investigate the causal effect of the regional unemployment rate on the crime rate.
Deep learning has enjoyed tremendous success in a variety of applications, but its application to quantile regression remains scarce. A major advantage of the deep learning approach is its flexibility to model complex data in a more parsimonious way than nonparametric smoothing methods. However, while deep learning has brought breakthroughs in prediction, it is not well suited for statistical inference due to its black-box nature. In this article, we leverage the advantages of deep learning and apply it to quantile regression, where the goal is to produce interpretable results and perform statistical inference. We achieve this by adopting a semiparametric approach based on the partially linear quantile regression model, where covariates of primary interest for statistical inference are modeled linearly and all other covariates are modeled nonparametrically by means of a deep neural network. In addition to the new methodology, we provide theoretical justification for the proposed model by establishing the root-n consistency and asymptotic normality of the parametric coefficient estimator and the minimax optimal convergence rate of the neural nonparametric function estimator. Across several simulated and real data examples, the proposed model empirically produces superior estimates and more accurate predictions than various alternative approaches.
In this article, we propose a new bootstrap procedure for the empirical copula process. The procedure involves taking pseudo samples of normalized ranks in the same fashion as the classical bootstrap and applying small perturbations to break ties in the normalized ranks. Our procedure is a simple modification of the usual bootstrap based on sampling with replacement, yet it provides noticeable improvement in the finite sample performance. We also discuss how to incorporate our procedure into the time series framework. Since nonparametric rank statistics can be treated as functionals of the empirical copula, our proposal is useful in approximating the distribution of rank statistics in general. As an empirical illustration, we apply our bootstrap procedure to test the null hypotheses of positive quadrant dependence, tail monotonicity, and stochastic monotonicity, using U.S. Census data on spousal incomes in the past 15 years.
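A minimal sketch of the tie-break idea, using Spearman's rho as the illustrative rank statistic; the perturbation scale eps is an assumption, not a prescription from the article.

```python
# Sketch: resample pairs with replacement, then add tiny uniform perturbations
# so the resampled ranks contain no ties before recomputing the rank statistic.
import numpy as np
from scipy.stats import spearmanr

def tie_break_bootstrap(x, y, n_boot=999, eps=1e-6, seed=None):
    rng = np.random.default_rng(seed)
    x, y, n = np.asarray(x, float), np.asarray(y, float), len(x)
    stats_ = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # classical resampling
        xb = x[idx] + eps * rng.uniform(size=n)      # small perturbations
        yb = y[idx] + eps * rng.uniform(size=n)      # break ties in the ranks
        stats_[b] = spearmanr(xb, yb)[0]
    return stats_                                    # bootstrap distribution
```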
This article develops bootstrap methods for practical statistical inference in panel data quantile regression models with fixed effects. We consider random-weighted bootstrap resampling and formally establish its validity for asymptotic inference. The bootstrap algorithm is simple to implement in practice, using a weighted quantile regression estimation for fixed effects panel data. We provide results under conditions that allow for temporal dependence of observations within individuals, thus encompassing a large class of possible empirical applications. Monte Carlo simulations provide numerical evidence that the proposed bootstrap methods have correct finite sample properties. Finally, we provide an empirical illustration using the environmental Kuznets curve.
This article proposes a new procedure to validate the multi-factor pricing theory by testing for the presence of alpha in linear factor pricing models with a large number of assets. Because the market's inefficient pricing is likely to be confined to a small fraction of exceptional assets, we develop a testing procedure that is particularly powerful against sparse signals. Based on high-dimensional Gaussian approximation theory, we propose a simulation-based approach to approximate the limiting null distribution of the test. Our numerical studies show that the new procedure delivers reasonable size and achieves substantial power improvement compared to existing tests under sparse alternatives, especially for weak signals.
This article proposes a uniform functional inference method for nonparametric regressions in a panel-data setting that features general unknown forms of spatio-temporal dependence. The method requires a long time span, but does not impose any restriction on the size of the cross section or the strength of spatial correlation. The uniform inference is justified via a new growing-dimensional Gaussian coupling theory for spatio-temporally dependent panels. We apply the method in two empirical settings. One concerns the nonparametric relationship between asset price volatility and trading volume as depicted by the mixture of distribution hypothesis. The other pertains to testing the rationality of survey-based forecasts, in which we document nonparametric evidence for information rigidity among professional forecasters, offering new support for sticky-information and noisy-information models in macroeconomics.
We propose a simple correction for misspecification in trend-cycle decompositions when the stochastic trend is assumed to be a random walk process but the estimated trend displays some serial correlation in first differences. Possible sources of misspecification that would otherwise be hard to detect and correct for include a small amount of measurement error, omitted variables, or minor approximation errors in model dynamics when estimating trend. Our proposed correction is conducted via application of a univariate Beveridge-Nelson decomposition to the preliminary estimated trend and we show with Monte Carlo analysis that our approach can work as well as if the original model used to estimate trend were correctly specified. We demonstrate the empirical relevance of the correction in an application to estimating r* as the trend of a risk-free short-term real interest rate. We find that our corrected estimate of r* is considerably smoother than the preliminary estimate from a multivariate Beveridge-Nelson decomposition based on a vector error correction model, consistent with the presence of at least a small amount of measurement error in some of the variables included in the multivariate model.
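A minimal sketch of a univariate Beveridge-Nelson correction of a preliminary trend estimate, assuming an AR(1) for its first differences; the article's full procedure, including the multivariate BN step used to obtain the preliminary trend, is not reproduced here.

```python
# Sketch: fit an AR(1) to the first differences of a preliminary trend estimate
# and compute the Beveridge-Nelson trend, reassigning residual serial
# correlation to the cycle. If the preliminary trend were a pure random walk,
# phi would be near zero and the correction would leave it essentially unchanged.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def bn_corrected_trend(prelim_trend):
    y = np.asarray(prelim_trend, float)
    dy = np.diff(y)
    fit = AutoReg(dy, lags=1, trend="c").fit()
    phi = fit.params[1]
    mu = fit.params[0] / (1 - phi)                 # unconditional mean of the differences
    # BN trend: current level plus expected cumulative future (drift-adjusted) changes
    return y[1:] + (phi / (1 - phi)) * (dy - mu)
```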
We propose a new asset pricing model that is applicable to the big panel of return data. The main idea of this model is to learn the conditional distribution of the return, which is approximated by a step distribution function constructed from conditional quantiles of the return. To study conditional quantiles of the return, we propose a new conditional quantile variational autoencoder (CQVAE) network. The CQVAE network specifies a factor structure for conditional quantiles with latent factors learned from a VAE network and nonlinear factor loadings learned from a “multi-head” network. Under the CQVAE network, we allow the observed covariates such as asset characteristics to guide the structure of latent factors and factor loadings. Furthermore, we provide a two-step estimation procedure for the CQVAE network. Using the learned conditional distribution of return from the CQVAE network, we propose our asset pricing model from the mean of this distribution, and additionally, we use both the mean and variance of this distribution to select portfolios. Finally, we apply our CQVAE asset pricing model to analyze a large 60-year US equity return dataset. Compared with the benchmark conditional autoencoder model, the CQVAE model not only delivers much larger values of out-of-sample total and predictive R2’s, but also earns at least 30.9% higher values of Sharpe ratios for both long-short and long-only portfolios.
This article proposes different methods to consistently detect multiple breaks in copula-based dependence measures. In addition to classical binary segmentation, the more recent wild binary segmentation (WBS) is also considered. For binary segmentation, consistency of the estimators for the location of the breakpoints as well as the number of breaks is proved, taking filtering effects from AR-GARCH models explicitly into account. Monte Carlo simulations based on a factor copula as well as on a Clayton copula model illustrate the strengths and limitations of the procedures. A real data application to recent Euro Stoxx 50 data reveals some interpretable breaks in the dependence structure.
Noncompliance is a common problem in randomized experiments in various fields. Under certain assumptions, the complier average treatment effect is identifiable and equal to the ratio of the intention-to-treat effects of the potential outcomes to that of the treatment received. To improve the estimation efficiency, we propose three model-assisted estimators for the complier average treatment effect in randomized experiments with a binary outcome. We study their asymptotic properties, compare their efficiencies with that of the Wald estimator, and propose the Neyman-type conservative variance estimators to facilitate valid inferences. Moreover, we extend our methods and theory to estimate the multiplicative complier average treatment effect. Our analysis is randomization-based, allowing the working models to be misspecified. Finally, we conduct simulation studies to illustrate the advantages of the model-assisted methods and apply these analysis methods in a randomized experiment to evaluate the effect of academic services or incentives on academic performance.
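The benchmark Wald estimator referenced in the abstract, the ratio of the two intention-to-treat effects, can be computed directly as sketched below; the article's model-assisted estimators refine this baseline.

```python
# Sketch: Wald estimator of the complier average treatment effect.
import numpy as np

def wald_cace(z, d, y):
    """z: random assignment (0/1), d: treatment received (0/1), y: outcome."""
    z, d, y = map(np.asarray, (z, d, y))
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # ITT effect on the outcome
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # ITT effect on treatment received
    return itt_y / itt_d
```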
We propose a general framework for constructing self-normalized multiple-change-point tests with time series data. The only building block is a user-specified single-change-detecting statistic, which covers a large class of popular methods, including the cumulative sum process, outlier-robust rank statistics, and order statistics. The proposed test statistic requires neither robust and consistent estimation of nuisance parameters, nor selection of bandwidth parameters, nor pre-specification of the number of change points. The finite-sample performance shows that the proposed test is size-accurate, robust against misspecification of the alternative hypothesis, and more powerful than existing methods. Case studies of the Shanghai-Hong Kong Stock Connect turnover are provided.
This article considers identification and estimation of the causal effect of the time Z until a subject is treated on a duration T. The time-to-treatment is not randomly assigned, T is randomly right censored by a random variable C, and the time-to-treatment Z is right censored by T ∧ C. The endogeneity issue is treated using an instrumental variable explaining Z and independent of the error term of the model. We study identification in a fully nonparametric framework. We show that our specification generates an integral equation, of which the regression function of interest is a solution. We provide identification conditions that rely on this identification equation. We assume that the regression function follows a parametric model for estimation purposes. We propose an estimation procedure and give conditions under which the estimator is asymptotically normal. The estimators exhibit good finite sample properties in simulations. Our methodology is applied to evaluate the effect of the timing of a therapy for burnout.
A huge literature on modeling cross-sectional dependence in panels has been developed using interactive effects (IE). One area of contention is the hypothesis concerning whether the regressors and factor loadings are correlated or not. Under the null hypothesis that they are conditionally independent, we can still apply the consistent and robust two-way fixed effects estimator. As an important specification test, we develop an LM test for both static and dynamic panels with IE. Simulation results confirm the satisfactory performance of the LM test in small samples. We demonstrate its usefulness with an application to a total of 22 datasets, including static panels with a small T and dynamic panels with serially correlated factors, providing convincing evidence that the null hypothesis is not rejected in
Since their introduction by Abadie and Gardeazabal, Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document in settings with real data that, under random assignment, SC-type estimators can have root mean-squared errors that are substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected. While our results only directly apply in settings where treatment is assigned randomly, we believe that they can complement model-based approaches even for observational studies.
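For reference, the standard SC weights are the simplex-constrained least-squares solution on pre-treatment outcomes; a minimal sketch follows (the MUSC estimator itself is not reproduced, and matching on pre-treatment outcomes only is an assumption of this sketch).

```python
# Sketch: synthetic-control weights as nonnegative, sum-to-one least squares.
import numpy as np
from scipy.optimize import minimize

def sc_weights(y_treated_pre, y_controls_pre):
    """y_treated_pre: (T0,) treated-unit pre-treatment outcomes.
    y_controls_pre: (T0, J) pre-treatment outcomes of J control units."""
    T0, J = y_controls_pre.shape
    objective = lambda w: np.sum((y_treated_pre - y_controls_pre @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * J
    res = minimize(objective, np.full(J, 1.0 / J), bounds=bounds,
                   constraints=constraints, method="SLSQP")
    return res.x

# The post-treatment effect estimate is then
# y_treated_post - y_controls_post @ sc_weights(y_treated_pre, y_controls_pre)
```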
We introduce a new class of semiparametric dynamic autoregressive models for the Amihud illiquidity measure, which captures both the long-run trend in the illiquidity series with a nonparametric component and the short-run dynamics with an autoregressive component. We develop a generalized method of moments (GMM) estimator based on conditional moment restrictions and an efficient semiparametric maximum likelihood (ML) estimator based on an iid assumption. We derive large sample properties for our estimators. Finally, we demonstrate the model's fitting performance and its empirical relevance in an application: we investigate how the different components of the illiquidity process obtained from our model relate to the stock market risk premium, using data on the S&P 500 stock market index.
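The Amihud (2002) illiquidity measure that the model takes as input is commonly computed as the average ratio of absolute return to dollar volume; a minimal rolling implementation is sketched below (the 21-day window is an assumption for illustration).

```python
# Sketch: rolling Amihud illiquidity, mean of |return| / dollar volume.
import pandas as pd

def amihud_illiquidity(returns: pd.Series, dollar_volume: pd.Series,
                       window: int = 21) -> pd.Series:
    """Daily return and dollar-volume series, aligned on the same date index."""
    daily_ratio = returns.abs() / dollar_volume
    return daily_ratio.rolling(window).mean()
```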
We introduce generalized autoregressive positive-valued (GARP) processes, a class of autoregressive and moving-average processes that extends the class of existing autoregressive positive-valued (ARP) processes in one important dimension: each conditional moment dynamic is driven by a different and identifiable moving average of the variable of interest. The article provides ergodicity conditions for GARP processes and derives closed-form conditional and unconditional moments. The article also presents estimation and inference methods, illustrated by an application to European option pricing where the daily realized variance follows a GARP dynamic. Our results show that using GARP processes reduces pricing errors by substantially more than using ARP processes.
How informative are treatment effects estimated in one region or time period for another region or time? In this article, I derive bounds on the average treatment effect in a context of interest using experimental evidence from another context. The bounds are based on (a) the information identified about treatment effect heterogeneity due to unobservables in the experiment and (b) using differences in outcome distributions across contexts to learn about differences in distributions of unobservables. Empirically, using data from a pair of remedial education experiments carried out in India, I show the bounds are able to recover average treatment effects in one location using results from the other while the benchmark method cannot.
Policy analysts are often interested in treating the units with extreme outcomes, such as infants with extremely low birth weights. Existing changes-in-changes (CIC) estimators are tailored to middle quantiles and do not work well for such subpopulations. This article proposes a new CIC estimator to accurately estimate treatment effects at extreme quantiles. With its asymptotic normality, we also propose a method of statistical inference, which is simple to implement. Based on simulation studies, we propose to use our extreme CIC estimator for extreme quantiles, while the conventional CIC estimator should be used for intermediate quantiles. Applying the proposed method, we study the effects of income gains from the 1993 EITC reform on infant birth weights for those in the most critical conditions. This article is accompanied by a Stata command.
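A minimal sketch of the conventional (mean) changes-in-changes estimator of Athey and Imbens, which the article's extreme-quantile estimator refines; the quantile-specific and extreme-value steps are not reproduced here.

```python
# Sketch: mean CIC effect. Treated pre-period outcomes are mapped through the
# control group's quantile transformation to impute the untreated post-period
# counterfactual for the treated group.
import numpy as np

def cic_mean_effect(y00, y01, y10, y11):
    """y_gt: outcomes for group g (0=control, 1=treated) in period t (0=pre, 1=post)."""
    y00, y01, y10, y11 = map(np.asarray, (y00, y01, y10, y11))
    ranks = np.searchsorted(np.sort(y00), y10, side="right") / y00.size  # F_00(y10)
    ranks = np.clip(ranks, 1e-6, 1.0)
    counterfactual = np.quantile(y01, ranks)      # F_01^{-1}(F_00(y10))
    return y11.mean() - counterfactual.mean()
```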
Many popular specifications for Vector Autoregressions (VARs) with multivariate stochastic volatility are not invariant to the way the variables are ordered, due to the use of a lower triangular parameterization of the error covariance matrix. We show that the order invariance problem in existing approaches is likely to become more serious in large VARs. We propose the use of a specification which avoids this lower triangular parameterization. We show that the presence of multivariate stochastic volatility allows for identification of the proposed model and prove that it is invariant to ordering. We develop a Markov chain Monte Carlo algorithm which allows for Bayesian estimation and prediction. In exercises involving artificial and real macroeconomic data, we demonstrate that the choice of variable ordering can have non-negligible effects on empirical results when using the non-order-invariant approach. In a macroeconomic forecasting exercise involving VARs with 20 variables, we find that our order-invariant approach leads to the best forecasts and that some choices of variable ordering can lead to poor forecasts using a conventional, non-order-invariant approach.
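A small numerical illustration of why a lower triangular (Cholesky) parameterization is not order invariant: permuting the variables and re-factorizing does not simply permute the original triangular factor.

```python
# Sketch: the Cholesky factor of a permuted covariance matrix differs from the
# permuted Cholesky factor of the original covariance, so triangular
# identification schemes depend on the variable ordering.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
sigma = A @ A.T                                   # a valid error covariance matrix
P = np.eye(3)[[2, 0, 1]]                          # reorder the three variables

L_original = np.linalg.cholesky(sigma)
L_permuted = np.linalg.cholesky(P @ sigma @ P.T)

# If the triangular scheme were order-invariant, reordering would just permute
# the impact matrix; it does not.
print(np.allclose(P @ L_original @ P.T, L_permuted))   # False
```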