2227. Generating Artificial Training Errors for Forecast Combination Based on Shrunk Sample Covariance Matrices
Invited abstract in session WB-6: Predictive Analytics: Forecast Combination & Hyperparameter Optimization, stream Analytics, Data Science, and Forecasting.
Wednesday, 10:45-12:15, Room: H9
Authors (first author is the speaker)
1. André Konersmann, Chair of BA and Business Informatics, Catholic University of Eichstätt-Ingolstadt
2. Thomas Setzer, Chair of BA and Business Informatics, Catholic University of Eichstätt-Ingolstadt
Abstract
In the era of data-driven decision-making, accurate and reliable predictions are crucial for future planning and the organizational success of businesses across various sectors. Forecast combination has proven effective in improving predictive performance over individual forecasting models by leveraging the strengths of multiple forecasters and mitigating the potentially high errors of any single forecaster.
However, learning weights for forecast combination from past error observations is prone to overfitting and high out-of-sample errors when training data are limited – a common scenario in many forecasting tasks. This is because the sample covariance matrix and its inverse, which are typically used to compute, for instance, in-sample optimal weights, are unstable in such settings.
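For context, a standard choice of in-sample optimal weights is the minimum-variance solution w = S^{-1} 1 / (1' S^{-1} 1), where S is the sample covariance matrix of past forecast errors and 1 is a vector of ones. A minimal Python sketch of this computation (the function name and setup are illustrative, not taken from the paper):

```python
import numpy as np

def min_variance_weights(errors: np.ndarray) -> np.ndarray:
    """In-sample optimal (minimum-variance) combination weights from a
    T x k matrix of past forecast errors of k individual forecasters."""
    sigma = np.cov(errors, rowvar=False)           # k x k sample covariance
    ones = np.ones(sigma.shape[0])
    sigma_inv_ones = np.linalg.solve(sigma, ones)  # avoids explicit inversion
    return sigma_inv_ones / (ones @ sigma_inv_ones)
```

With few observations T relative to the number of forecasters k, S is noisy and its inverse amplifies that noise, which is the instability the abstract refers to.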
In this study, we introduce an approach to generate additional synthetic error observations. The aim is to enable the learning of more stable weights on these extended training datasets. Artificial data generation techniques have been shown to reduce sensitivity to random patterns and mitigate overfitting in various domains, yet they have received little attention in the context of forecast combination. We develop and evaluate methods to generate multivariate forecast error data from shrunk sample covariance matrices, aiming to reduce overfitting while preserving the generalizable structure of the original data.
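One way such a generator could look, assuming linear shrinkage of the sample covariance toward a diagonal target and multivariate normal sampling; the shrinkage target, intensity, and sampling distribution here are illustrative assumptions, not necessarily the paper's method:

```python
import numpy as np

def generate_synthetic_errors(errors: np.ndarray, n_synthetic: int,
                              alpha: float = 0.5, rng=None) -> np.ndarray:
    """Draw n_synthetic artificial error observations from a multivariate
    normal whose covariance is the sample covariance shrunk toward a
    diagonal target with intensity alpha (both choices illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.cov(errors, rowvar=False)
    target = np.diag(np.diag(sigma))               # diagonal shrinkage target
    sigma_shrunk = alpha * target + (1.0 - alpha) * sigma
    return rng.multivariate_normal(errors.mean(axis=0), sigma_shrunk,
                                   size=n_synthetic)
```

The intuition is that the shrunk covariance suppresses spurious correlations in the small sample, so observations drawn from it carry the generalizable error structure while diluting noise-driven patterns.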
In experimental evaluations using Monte Carlo–simulated forecast error data, we assess whether our proposed methods can reduce overfitting and improve out-of-sample accuracy compared to learning weights solely from the original data.
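A bare-bones version of such a comparison, reusing the two sketches above; the true covariance, sample sizes, and shrinkage intensity are arbitrary illustrative choices:

```python
import numpy as np

def out_of_sample_mse(weights: np.ndarray, test_errors: np.ndarray) -> float:
    """Mean squared error of the weighted error combination on held-out data."""
    return float(np.mean((test_errors @ weights) ** 2))

rng = np.random.default_rng(0)
k, n_train, n_test, n_syn = 5, 20, 10_000, 200
true_cov = 0.5 * np.eye(k) + 0.5                   # equicorrelated true errors
train = rng.multivariate_normal(np.zeros(k), true_cov, size=n_train)
test = rng.multivariate_normal(np.zeros(k), true_cov, size=n_test)

w_original = min_variance_weights(train)
augmented = np.vstack([train, generate_synthetic_errors(train, n_syn, rng=rng)])
w_augmented = min_variance_weights(augmented)

print("original: ", out_of_sample_mse(w_original, test))
print("augmented:", out_of_sample_mse(w_augmented, test))
```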
Keywords
- Forecasting algorithms
- Machine Learning
- Robustness
Status: accepted