Search Results (413)

Search Parameters:
Journal = Econometrics

Article
Estimation of Realized Asymmetric Stochastic Volatility Models Using Kalman Filter
Econometrics 2023, 11(3), 18; https://doi.org/10.3390/econometrics11030018 - 31 Jul 2023
Viewed by 293
Abstract
Despite the growing interest in realized stochastic volatility models, their estimation techniques, such as simulated maximum likelihood (SML), are computationally intensive. Based on the realized volatility equation, this study demonstrates that, in a finite sample, the quasi-maximum likelihood estimator based on the Kalman filter is competitive with the two-step SML estimator, which is less efficient than the SML estimator. In the empirical results for the S&P 500 index, the quasi-likelihood ratio tests favored the two-factor realized asymmetric stochastic volatility model with the standardized t distribution among alternative specifications, and an analysis of out-of-sample forecasts favored the realized stochastic volatility models, rejecting the model without the realized volatility measure. Furthermore, the forecasts of alternative RSV models are statistically equivalent for the data covering the global financial crisis.
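The estimation idea can be illustrated with a stripped-down sketch — not the authors' exact specification: treat log realized volatility as a noisy observation of a latent AR(1) log-volatility process and evaluate the Gaussian quasi-likelihood with a Kalman filter. The model form, parameter values, and simulated data below are illustrative assumptions.

```python
import numpy as np

def kalman_quasi_loglik(y, mu, phi, q, r):
    """Gaussian quasi-log-likelihood of y_t = h_t + eps_t, with latent state
    h_t = mu + phi*(h_{t-1} - mu) + eta_t, via the Kalman filter.
    q = Var(eta_t), r = Var(eps_t)."""
    a, p = mu, q / (1.0 - phi**2)          # stationary initialisation
    ll = 0.0
    for yt in y:
        a_pred = mu + phi * (a - mu)       # state prediction
        p_pred = phi**2 * p + q
        v = yt - a_pred                    # innovation
        f = p_pred + r                     # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p_pred / f                     # Kalman gain and update
        a = a_pred + k * v
        p = (1.0 - k) * p_pred
    return ll

# quick demo on simulated data
rng = np.random.default_rng(0)
T, mu, phi, q, r = 500, -1.0, 0.95, 0.05, 0.2
h = np.empty(T); h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t-1] - mu) + np.sqrt(q) * rng.standard_normal()
y = h + np.sqrt(r) * rng.standard_normal(T)
print(kalman_quasi_loglik(y, mu, phi, q, r))
```

Maximizing this quasi-likelihood over (mu, phi, q, r) with any numerical optimizer gives the QML estimator the abstract refers to.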
Article
Socio-Economic and Demographic Factors Associated with COVID-19 Mortality in European Regions: Spatial Econometric Analysis
Econometrics 2023, 11(2), 17; https://doi.org/10.3390/econometrics11020017 - 20 Jun 2023
Viewed by 760
Abstract
In some NUTS 2 (Nomenclature of Territorial Units for Statistics) regions of Europe, the COVID-19 pandemic triggered an increase in mortality of several dozen percent, and in others of only a few percent. Based on the data on 189 regions from 19 European countries, we identified factors responsible for these differences, both intra- and internationally. Due to the spatial nature of the virus diffusion and to account for unobservable country-level and sub-national characteristics, we used spatial econometric tools to estimate two types of models, explaining (i) the number of cases per 10,000 inhabitants and (ii) the percentage increase in the number of deaths compared to the 2016–2019 average in individual regions (mostly NUTS 2) in 2020. We used two weight matrices simultaneously, accounting for both types of spatial autocorrelation: linked to geographical proximity and adherence to the same country. For the feature selection, we used Bayesian Model Averaging. The number of reported cases is negatively correlated with the share of risk groups in the population (60+ years old, older people reporting chronic lower respiratory disease, and high blood pressure) and the level of society's belief that the positive health effects of restrictions outweighed the economic losses. Furthermore, it was positively correlated with GDP per capita (PPS) and the percentage of people employed in the industry. By contrast, mortality (per number of infections) was limited by high-quality healthcare. Additionally, we noticed that the later the pandemic first hit a region, the lower the death toll there was, even when controlling for the number of infections.
(This article belongs to the Special Issue Health Econometrics)
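The weight matrices mentioned in the abstract can be sketched in a few lines. Below is a minimal, illustrative construction of one row-standardized proximity-based matrix W and the spatial lag Wx it induces; the toy adjacency and values are assumptions, not the paper's data.

```python
import numpy as np

# toy adjacency for 4 regions arranged on a line (1-2, 2-3, 3-4 are neighbours)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# row-standardise so each row of W sums to 1
W = A / A.sum(axis=1, keepdims=True)

x = np.array([10.0, 20.0, 30.0, 40.0])   # e.g. cases per 10,000 inhabitants
spatial_lag = W @ x                      # average outcome of each region's neighbours
print(spatial_lag)                       # [20. 20. 30. 30.]
```

A second matrix built from same-country membership instead of adjacency would be constructed the same way, and both can enter a spatial model simultaneously, as the abstract describes.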

Article
Skill Mismatch, Nepotism, Job Satisfaction, and Young Females in the MENA Region
Econometrics 2023, 11(2), 16; https://doi.org/10.3390/econometrics11020016 - 12 Jun 2023
Viewed by 869
Abstract
Skills utilization is an important factor affecting labor productivity and job satisfaction. This paper examines the effects of skills mismatch, nepotism, and gender discrimination on wages and job satisfaction in MENA workplaces. Gender discrimination implies social costs for firms due to higher turnover rates and lower retention levels. Young females suffer disproportionately more from this than their male counterparts, resulting in a wider gender gap in the labor market at multiple levels. We find that the skill mismatch problem appears to be more significant among specific demographic groups, such as females, immigrants, and ethnic minorities; it is also negatively correlated with job satisfaction and wages. We bridge the literature gap on youth skill mismatch's main determinants, including nepotism, by presenting evidence from several developing countries. Given the implied social costs associated with these practices and their impact on the labor market, we have compiled a list of policy recommendations that governments and relevant stakeholders should adopt to reduce these problems in the workplace. In doing so, we provide a guide to addressing MENA's skill mismatch and improving overall job satisfaction.
Article
Parameter Estimation of the Heston Volatility Model with Jumps in the Asset Prices
Econometrics 2023, 11(2), 15; https://doi.org/10.3390/econometrics11020015 - 02 Jun 2023
Viewed by 692
Abstract
The parametric estimation of stochastic differential equations (SDEs) has been the subject of intense study for several decades. The Heston model, for instance, is based on two coupled SDEs and is often used in financial mathematics for the dynamics of asset prices and their volatility. Calibrating it to real data would be very useful in many practical scenarios. It is very challenging, however, since the volatility is not directly observable. In this paper, a complete estimation procedure for the Heston model, with and without jumps in the asset prices, is presented. Bayesian regression combined with the particle filtering method is used as the estimation framework. Within the framework, we propose a novel approach to handle jumps in order to neutralise their negative impact on the estimates of the key parameters of the model. An improvement in the sampling in the particle filtering method is discussed as well. Our analysis is supported by numerical simulations of the Heston model to investigate the performance of the estimators. In addition, a practical follow-along recipe is given for obtaining adequate estimates from any given dataset.
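The particle-filtering step can be illustrated with a basic bootstrap particle filter for an Euler-discretised Heston-type variance process — a minimal sketch, not the paper's refined sampler or its jump treatment. All parameter values and the placeholder return series are assumptions.

```python
import numpy as np

def bootstrap_pf(returns, kappa, theta, xi, dt, n_part=1000, seed=1):
    """Bootstrap particle filter for an Euler-discretised Heston-type latent
    variance v_t; returns the filtered mean variance path."""
    rng = np.random.default_rng(seed)
    v = np.full(n_part, theta)                     # start at the long-run mean
    path = []
    for r in returns:
        # propagate particles through the variance dynamics (reflect at zero)
        v = np.abs(v + kappa * (theta - v) * dt
                   + xi * np.sqrt(np.maximum(v, 1e-12) * dt)
                   * rng.standard_normal(n_part))
        # weight particles by the return likelihood r ~ N(0, v*dt)
        w = np.exp(-0.5 * r * r / (v * dt)) / np.sqrt(v * dt)
        w /= w.sum()
        path.append(np.sum(w * v))                 # filtered mean variance
        v = v[rng.choice(n_part, n_part, p=w)]     # multinomial resampling
    return np.array(path)

rng = np.random.default_rng(0)
ret = 0.01 * rng.standard_normal(200)              # placeholder return series
est = bootstrap_pf(ret, kappa=3.0, theta=0.04, xi=0.4, dt=1/252)
print(est[:3])
```

In a full calibration, the filter's likelihood estimate would feed a Bayesian regression or parameter search over (kappa, theta, xi), as the abstract outlines.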

Article
Factorization of a Spectral Density with Smooth Eigenvalues of a Multidimensional Stationary Time Series
Econometrics 2023, 11(2), 14; https://doi.org/10.3390/econometrics11020014 - 31 May 2023
Viewed by 441
Abstract
The aim of this paper is to give a multidimensional version of the classical one-dimensional case of smooth spectral density. A spectral density with smooth eigenvalues and H eigenvectors gives an explicit method to factorize the spectral density and compute the Wold representation of a weakly stationary time series. A formula, similar to the Kolmogorov–Szego formula, is given for the covariance matrix of the innovations. These results are important for obtaining the best linear predictions of the time series. The results are applicable when the rank of the process is smaller than the dimension of the process, which occurs frequently in many current applications, including econometrics.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
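In the scalar case, the Kolmogorov–Szego formula the abstract generalizes says the one-step innovation variance equals 2π times the geometric mean of the spectral density. The snippet below checks this numerically for an AR(1) process, where the innovation variance is known to be the driving-noise variance; it is a textbook illustration, not the paper's multidimensional construction.

```python
import numpy as np

phi, sigma2 = 0.7, 1.5
omega = np.linspace(-np.pi, np.pi, 200001)

# AR(1) spectral density: f(w) = sigma2 / (2*pi * |1 - phi*exp(-iw)|^2)
f = sigma2 / (2 * np.pi) / np.abs(1 - phi * np.exp(-1j * omega)) ** 2

# Kolmogorov-Szego: innovation variance = 2*pi * exp( (1/2pi) int log f dw ),
# approximated here by the grid mean of log f
innov_var = 2 * np.pi * np.exp(np.log(f).mean())
print(innov_var)   # ~ 1.5, i.e. the driving-noise variance sigma2
```

The multivariate version in the paper replaces the scalar density with the eigenvalues of the spectral density matrix and yields the covariance matrix of the innovations.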
Article
Online Hybrid Neural Network for Stock Price Prediction: A Case Study of High-Frequency Stock Trading in the Chinese Market
Econometrics 2023, 11(2), 13; https://doi.org/10.3390/econometrics11020013 - 18 May 2023
Viewed by 855
Abstract
Time-series data, which exhibit a low signal-to-noise ratio, non-stationarity, and non-linearity, are commonly seen in high-frequency stock trading, where the objective is to increase the likelihood of profit by taking advantage of tiny discrepancies in prices and trading on them quickly and in huge quantities. For this purpose, it is essential to apply a trading method that is capable of fast and accurate prediction from such time-series data. In this paper, we developed an online time-series forecasting method for high-frequency trading (HFT) by integrating three deep learning neural network models, i.e., long short-term memory (LSTM), gated recurrent unit (GRU), and transformer; we abbreviate the new method to online LGT, or O-LGT. The key innovation underlying our method is its efficient storage management, which enables super-fast computing. Specifically, when computing the forecast for the immediate future, we only use the output calculated from the previous trading data (rather than the previous trading data themselves) together with the current trading data. Thus, the computation only involves updating the current data into the process. We evaluated the performance of O-LGT by analyzing high-frequency limit order book (LOB) data from the Chinese market. The results show that, in most cases, our model achieves similar speed with much higher accuracy than conventional fast supervised learning models for HFT. However, with a slight sacrifice in accuracy, O-LGT is approximately 12 to 64 times faster than the existing high-accuracy neural network models for LOB data from the Chinese market.
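The storage idea — carry forward only the previous output, never the raw history — can be sketched with a single recurrent cell standing in for the LSTM/GRU/transformer stack. Everything below (class name, dimensions, weights) is an illustrative assumption, not the O-LGT architecture.

```python
import numpy as np

class OnlineRecurrentForecaster:
    """Minimal sketch of O-LGT's storage management: keep only the previous
    hidden state, not the raw tick history, and update it with each new
    observation. A single tanh recurrent cell stands in for the full stack."""
    def __init__(self, dim_in, dim_h, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = 0.1 * rng.standard_normal((dim_h, dim_in))
        self.Wh = 0.1 * rng.standard_normal((dim_h, dim_h))
        self.wo = 0.1 * rng.standard_normal(dim_h)
        self.h = np.zeros(dim_h)          # the ONLY state carried between ticks

    def step(self, x):
        # O(1) work per tick: uses the stored h, never reprocesses old ticks
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.wo @ self.h           # one-step-ahead forecast

model = OnlineRecurrentForecaster(dim_in=4, dim_h=8)
rng = np.random.default_rng(1)
preds = [model.step(rng.standard_normal(4)) for _ in range(100)]
```

Because each forecast touches only the current tick and the stored state, cost per update is constant in the length of the history — the source of the speedups the abstract reports.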

Article
Local Gaussian Cross-Spectrum Analysis
Econometrics 2023, 11(2), 12; https://doi.org/10.3390/econometrics11020012 - 21 Apr 2023
Viewed by 940
Abstract
The ordinary spectrum is restricted in its applications, since it is based on the second-order moments (auto- and cross-covariances). Alternative approaches to spectrum analysis have been investigated based on other measures of dependence. One such approach was developed for univariate time series by the authors of this paper using the local Gaussian auto-spectrum based on the local Gaussian auto-correlations. This makes it possible to detect local structures in univariate time series that look similar to white noise when investigated by the ordinary auto-spectrum. In this paper, the local Gaussian approach is extended to a local Gaussian cross-spectrum for multivariate time series. The local Gaussian cross-spectrum has the desirable property that it coincides with the ordinary cross-spectrum for Gaussian time series, which implies that it can be used to detect non-Gaussian traits in the time series under investigation. In particular, if the ordinary spectrum is flat, then peaks and troughs of the local Gaussian spectrum can indicate nonlinear traits, which potentially might reveal local periodic phenomena that are undetected in an ordinary spectral analysis.

Article
Information-Criterion-Based Lag Length Selection in Vector Autoregressive Approximations for I(2) Processes
Econometrics 2023, 11(2), 11; https://doi.org/10.3390/econometrics11020011 - 20 Apr 2023
Viewed by 643
Abstract
When using vector autoregressive (VAR) models for approximating time series, a key step is the selection of the lag length. Often this is performed using information criteria, even if a theoretical justification is lacking in some cases. For stationary processes, the asymptotic properties of the corresponding estimators are well documented in great generality in the book by Hannan and Deistler (1988). If the data-generating process is not a finite-order VAR, the selected lag length typically tends to infinity as a function of the sample size. For invertible vector autoregressive moving average (VARMA) processes, this typically happens at a rate roughly proportional to log T. The same approach to lag length selection is also followed in practice for more general processes, for example, unit root processes. In the I(1) case, the literature suggests that the behavior is analogous to the stationary case. For I(2) processes, no such results are currently known. This note closes this gap, concluding that information-criteria-based lag length selection for I(2) processes indeed shows properties similar to those in the stationary case.
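Information-criterion lag selection itself is mechanical: fit every candidate order on a common effective sample and keep the minimizer. The univariate sketch below uses BIC and OLS; the function name, the use of BIC rather than another criterion, and the simulated AR(1) data are assumptions for illustration.

```python
import numpy as np

def select_lag_bic(y, pmax):
    """Choose an autoregressive lag length by BIC, fitting each order
    p = 1..pmax by OLS on the same effective sample."""
    T = len(y)
    best_p, best_bic = 1, np.inf
    for p in range(1, pmax + 1):
        # lagged regressors y_{t-1}, ..., y_{t-p} for t = pmax..T-1
        X = np.column_stack([y[pmax - k : T - k] for k in range(1, p + 1)])
        yy = y[pmax:]
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        n = len(yy)
        bic = n * np.log(resid @ resid / n) + p * np.log(n)
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

rng = np.random.default_rng(3)
e = rng.standard_normal(1000)
y = np.empty(1000); y[0] = e[0]
for t in range(1, 1000):
    y[t] = 0.6 * y[t-1] + e[t]
print(select_lag_bic(y, pmax=8))   # typically selects a short lag for AR(1) data
```

The note's question is what this selected order does asymptotically when y is I(2) rather than stationary; the mechanics of the selection are unchanged.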

Article
Modeling COVID-19 Infection Rates by Regime-Switching Unobserved Components Models
Econometrics 2023, 11(2), 10; https://doi.org/10.3390/econometrics11020010 - 03 Apr 2023
Viewed by 1292
Abstract
The COVID-19 pandemic is characterized by a recurring sequence of peaks and troughs. This article proposes a regime-switching unobserved components (UC) approach to model the trend of COVID-19 infections as a function of this ebb and flow pattern. Estimated regime probabilities indicate the prevalence of either an infection up- or down-turning regime for every day of the observational period. This method provides an intuitive real-time analysis of the state of the pandemic as well as a tool for identifying structural changes ex post. We find that when applied to U.S. data, the model closely tracks regime changes caused by viral mutations, policy interventions, and public behavior.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
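The regime probabilities the abstract mentions are typically obtained with a Hamilton-type filter. Below is a minimal two-regime version for observations that are Gaussian within each regime — a stand-in for the full unobserved-components model; the parameter values and simulated "up-turn"/"down-turn" data are assumptions.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a 2-state Markov-switching model
    y_t ~ N(mu[s_t], sigma[s_t]^2), with transition matrix P where
    P[i, j] = Pr(s_t = j | s_{t-1} = i). Starts from a uniform prior."""
    pi = np.array([0.5, 0.5])
    out = []
    for yt in y:
        # Gaussian density of yt under each regime
        dens = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        post = pi * dens
        post /= post.sum()        # filtered Pr(s_t = j | data up to t)
        out.append(post)
        pi = post @ P             # one-step-ahead regime prediction
    return np.array(out)

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(1.0, 0.5, 100),     # "up-turning" regime
                    rng.normal(-1.0, 0.5, 100)])   # "down-turning" regime
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
pr = hamilton_filter(y, mu=np.array([1.0, -1.0]), sigma=np.array([0.5, 0.5]), P=P)
```

Plotting `pr[:, 0]` over time reproduces the kind of real-time regime assessment the article describes: the probability of the up-turning regime rises and falls with the data.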

Article
Detecting Common Bubbles in Multivariate Mixed Causal–Noncausal Models
Econometrics 2023, 11(1), 9; https://doi.org/10.3390/econometrics11010009 - 09 Mar 2023
Viewed by 1167
Abstract
This paper proposes concepts and methods to investigate whether the bubble patterns observed in individual time series are common among them. Having established the conditions under which common bubbles are present within the class of mixed causal–noncausal vector autoregressive models, we suggest statistical tools to detect the common locally explosive dynamics in a Student t-distribution maximum likelihood framework. The performance of both likelihood ratio tests and information criteria was investigated in a Monte Carlo study. Finally, we evaluated the practical value of our approach via an empirical application to three commodity prices.

Article
Semi-Metric Portfolio Optimization: A New Algorithm Reducing Simultaneous Asset Shocks
Econometrics 2023, 11(1), 8; https://doi.org/10.3390/econometrics11010008 - 07 Mar 2023
Cited by 2 | Viewed by 1333
Abstract
This paper proposes a new method for financial portfolio optimization based on reducing simultaneous asset shocks across a collection of assets. This may be understood as an alternative approach to risk reduction in a portfolio based on a new mathematical quantity. First, we apply recently introduced semi-metrics between finite sets to determine the distance between time series' structural breaks. Then, we build on the classical portfolio optimization theory of Markowitz and use this distance between asset structural breaks for our penalty function, rather than portfolio variance. Our experiments are promising: on synthetic data, we show that our proposed method does indeed diversify among time series with highly similar structural breaks and enjoys advantages over existing metrics between sets. On real data, experiments illustrate that our proposed optimization method performs well relative to nine other commonly used options, producing the second-highest returns, the lowest volatility, and the second-lowest drawdown. The main implication of this method for portfolio management is reducing simultaneous asset shocks and the potentially sharp associated drawdowns during periods of highly similar structural breaks, such as a market crisis. Our method adds to the considerable literature on portfolio optimization techniques in econometrics and could complement these via portfolio averaging.
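The optimization step can be sketched by swapping the covariance matrix in a Markowitz-style program for a penalty built from break-profile distances. Everything below is an illustrative assumption — in particular, converting distances to a penalty via `exp(-d)` (so that assets with similar break profiles are penalized for being co-held) is my stand-in, not the paper's semi-metric construction.

```python
import numpy as np
from scipy.optimize import minimize

def optimise_weights(exp_ret, S, gamma=1.0):
    """Long-only Markowitz-style optimisation with a generic penalty matrix S
    in place of the covariance matrix:
    maximise exp_ret'w - gamma * w'Sw  subject to  w >= 0, sum(w) = 1."""
    n = len(exp_ret)
    res = minimize(lambda w: -(exp_ret @ w) + gamma * w @ S @ w,
                   np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=({'type': 'eq',
                                 'fun': lambda w: w.sum() - 1.0},))
    return res.x

# illustrative distances between the assets' structural-break profiles:
# assets 0 and 1 break together; asset 2 breaks at very different times
d = np.array([[0.0, 0.1, 5.0],
              [0.1, 0.0, 5.0],
              [5.0, 5.0, 0.0]])
S = np.exp(-d)     # small distance -> large penalty for holding both
w = optimise_weights(np.array([0.05, 0.05, 0.05]), S)
print(w)
```

With equal expected returns, the optimizer tilts toward asset 2, whose breaks are dissimilar to the others — the diversification behavior the abstract reports on synthetic data.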

Article
Causal Vector Autoregression Enhanced with Covariance and Order Selection
Econometrics 2023, 11(1), 7; https://doi.org/10.3390/econometrics11010007 - 24 Feb 2023
Viewed by 1164
Abstract
A causal vector autoregressive (CVAR) model is introduced for weakly stationary multivariate processes, combining a recursive directed graphical model for the contemporaneous components and a vector autoregressive model longitudinally. Block Cholesky decomposition with varying block sizes is used to solve the model equations and estimate the path coefficients along a directed acyclic graph (DAG). If the DAG is decomposable, i.e., the zeros form a reducible zero pattern (RZP) in its adjacency matrix, then covariance selection is applied, which assigns zeros to the corresponding path coefficients. Real-life applications are also considered, where for the optimal order p of the fitted CVAR(p) model, order selection is performed with various information criteria.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
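The link between a Cholesky factorization and contemporaneous path coefficients can be shown in the simplest (unit block size, full DAG) case: for a recursive model x = Ax + e with A strictly lower triangular, A and the innovation variances follow directly from the Cholesky factor of the covariance matrix. This is a textbook identity, not the paper's varying-block-size algorithm; the covariance matrix below is made up.

```python
import numpy as np

def recursive_path_coefficients(Sigma):
    """Path coefficients of a recursive (DAG-ordered) contemporaneous model
    x = A x + e, recovered from the Cholesky factor of Sigma: with
    Sigma = L L' and D = diag(diag(L)), we have A = I - D L^{-1} and
    Var(e_i) = L_ii^2."""
    L = np.linalg.cholesky(Sigma)
    D = np.diag(np.diag(L))
    A = np.eye(len(Sigma)) - D @ np.linalg.inv(L)
    innov_var = np.diag(L) ** 2
    return A, innov_var

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 2.0, 0.6],
                  [0.3, 0.6, 1.5]])
A, v = recursive_path_coefficients(Sigma)
print(A)
```

One can verify that (I - A) Sigma (I - A)' is diagonal with entries v, i.e. the recursive equations have uncorrelated innovations; covariance selection would additionally zero out the coefficients excluded by the DAG.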

Article
Exploring Industry-Distress Effects on Loan Recovery: A Double Machine Learning Approach for Quantiles
Econometrics 2023, 11(1), 6; https://doi.org/10.3390/econometrics11010006 - 14 Feb 2023
Viewed by 1331
Abstract
In this study, we explore the effect of industry distress on recovery rates by using the unconditional quantile regression (UQR). The UQR provides better interpretative and thus policy-relevant information on the predictive effect of the target variable than the conditional quantile regression. To deal with a broad set of macroeconomic and industry variables, we use the lasso-based double selection to estimate the predictive effects of industry distress and select relevant variables. Our sample consists of 5334 debt and loan instruments in Moody's Default and Recovery Database from 1990 to 2017. The results show that industry distress decreases recovery rates from 15.80% to 2.94% for the 15th to 55th percentile range and slightly increases the recovery rates in the lower and upper tails. The UQR provides a quantitative measurement of the downturn loss given default that the Basel Capital Accord requires.
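Unconditional quantile regression is usually implemented as an OLS regression of the recentered influence function (RIF) of the target quantile on the covariates. The sketch below shows that core step with a kernel density estimate at the quantile; the simulated "distress" dummy and recovery-rate data are illustrative assumptions, and the full paper additionally applies lasso-based double selection.

```python
import numpy as np

def rif_quantile_regression(y, X, tau, bandwidth=None):
    """Unconditional quantile (RIF) regression: OLS of the recentred
    influence function of the tau-quantile of y on covariates X."""
    q = np.quantile(y, tau)
    # Gaussian-kernel estimate of the density of y at q (Silverman bandwidth)
    h = bandwidth or 1.06 * y.std() * len(y) ** (-1 / 5)
    fq = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    # RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau)
    rif = q + (tau - (y <= q)) / fq
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, rif, rcond=None)
    return beta

rng = np.random.default_rng(5)
n = 2000
d = rng.integers(0, 2, n)                          # "industry distress" dummy
y = 0.6 - 0.2 * d + 0.1 * rng.standard_normal(n)   # simulated recovery rate
beta = rif_quantile_regression(y, d, tau=0.5)
print(beta)   # negative slope: distress lowers the unconditional median
```

Repeating this for a grid of tau values traces out the quantile-by-quantile distress effects reported in the abstract.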

Article
Building Multivariate Time-Varying Smooth Transition Correlation GARCH Models, with an Application to the Four Largest Australian Banks
Econometrics 2023, 11(1), 5; https://doi.org/10.3390/econometrics11010005 - 06 Feb 2023
Viewed by 1314
Abstract
This paper proposes a methodology for building Multivariate Time-Varying STCC–GARCH models. The novel contributions in this area are the specification tests related to the correlation component, the extension of the general model to allow for additional correlation regimes, and a detailed exposition of the systematic, improved modelling cycle required for such nonlinear models. An R package is available that implements the steps of the modelling cycle. Simulations demonstrate the robustness of the recommended model-building approach. The modelling cycle is illustrated using daily return series for Australia's four largest banks.

Article
Comparing the Conditional Logit Estimates and True Parameters under Preference Heterogeneity: A Simulated Discrete Choice Experiment
Econometrics 2023, 11(1), 4; https://doi.org/10.3390/econometrics11010004 - 25 Jan 2023
Cited by 1 | Viewed by 1433
Abstract
Health preference research (HPR) is the subfield of health economics dedicated to understanding the value of health and health-related objects using observational or experimental methods. In a discrete choice experiment (DCE), the utility of objects in a choice set (e.g., brand-name medication, generic medication, no medication) may differ systematically between persons due to interpersonal heterogeneity. To allow for interpersonal heterogeneity, choice probabilities may be described using logit functions with fixed individual-specific parameters. However, in practice, a study team may ignore heterogeneity in health preferences and estimate a conditional logit (CL) model. In this simulation study, we examine the effects of omitted variance and correlations (i.e., omitted heterogeneity) in logit parameters on the estimation of the coefficients, willingness to pay (WTP), and choice predictions. The simulated DCE results show that CL estimates may be biased, depending on the structure of the heterogeneity used in the data-generation process. We also found that these biases in the coefficients led to a substantial difference between the true and estimated WTP (i.e., up to 20%). We further found that CL and true choice probabilities were similar to each other (i.e., the difference was less than 0.08) regardless of the underlying structure. The results imply that, under preference heterogeneity, CL estimates may differ from their true means, and these differences can have substantive effects on the WTP estimates. More specifically, CL WTP estimates may be underestimated due to interpersonal heterogeneity, and a failure to recognize this bias in HPR indirectly underestimates the value of treatment, substantially reducing quality of care. These findings have important implications in health economics because CL remains widely used in practice.
(This article belongs to the Special Issue Health Econometrics)
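The mechanism behind the bias can be shown in a few lines: averaging individual logit probabilities over heterogeneous coefficients is not the same as a single logit evaluated at the mean coefficient, which is what a CL model effectively fits. The coefficient distribution and attribute differences below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# heterogeneous quality coefficient across persons; fixed price coefficient
b_q = rng.normal(1.0, 1.0, n)     # person-specific taste, mean 1, sd 1
b_p = -1.0
dq, dp = 1.0, 0.5                 # quality and price advantage of option A over B

# true average probability of choosing A, integrating over the heterogeneity
p_true = np.mean(1.0 / (1.0 + np.exp(-(b_q * dq + b_p * dp))))

# probability implied by a homogeneous (conditional-logit-style) model
# evaluated at the mean coefficient
p_cl = 1.0 / (1.0 + np.exp(-(1.0 * dq + b_p * dp)))
print(p_true, p_cl)   # the two differ: a mixture of logits is not a logit
```

Because the fitted CL coefficients absorb this flattening, ratios such as WTP (a quality coefficient divided by the price coefficient) inherit the distortion — the bias channel the abstract quantifies.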
