1556. Linear quasi-shrinkage estimator for high-dimensional optimization with linear constraints
Invited abstract in session MC-4: Data science meets strongly NP-Hard CO, stream Data Science meets Optimization.
Monday, 12:30-14:00, Room: Rupert Beckett LT
Authors (first author is the speaker)
1. Naqi Huang, Delft University of Technology
2. Nestor Parolya, Delft University of Technology
3. Theresia van Essen, Delft University of Technology
Abstract
In large-scale, data-driven optimization problems, parameters are often known only approximately due to noisy and small-sized samples. We consider optimization problems with linear constraints in which the true parameter matrix is not precisely known, and the number of constraints and the number of variables are comparable and both tend to infinity. Our goal is to construct a linear estimator of the true parameter matrix by minimizing the Frobenius distance between the estimated and the true parameter matrix. Our method has two advantages: 1) all parameters introduced in the linear estimator are consistently estimated, eliminating the need for further calibration, and 2) the constraints of the formulated optimization problem remain linear, ensuring computational efficiency. Compared to other robust methods, simulations show that our linear estimator consistently produces stable outcomes across various scenarios in terms of the objective value, the fraction of violated constraints, and the magnitude of constraint violation. Additionally, it demonstrates resilience against high-level noise, making it a robust choice under uncertainty.
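The abstract does not spell out the estimator, but the idea of a linear shrinkage estimator whose weight minimizes the expected Frobenius loss and is itself estimated from the data (so no tuning parameter remains) can be illustrated with a classical sketch. Everything below is an assumption for illustration: the observation model (n noisy copies of a true matrix A), the zero-matrix shrinkage target, and all variable names are hypothetical, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the authors' model): the true parameter
# matrix A is observed through n noisy samples X_k = A + E_k.
p, q, n = 50, 60, 25
A = rng.normal(size=(p, q))
X = A + rng.normal(scale=2.0, size=(n, p, q))

Xbar = X.mean(axis=0)  # naive sample estimate of A

# Linear shrinkage toward the zero matrix: A_hat = c * Xbar.  The
# weight minimizing the expected Frobenius loss E||c*Xbar - A||_F^2 is
#   c* = ||A||_F^2 / (||A||_F^2 + E||Xbar - A||_F^2),
# and both terms can be estimated consistently from the data, so no
# calibration step is needed.
noise_var = X.var(axis=0, ddof=1).mean()      # per-entry noise variance
err = noise_var * p * q / n                   # estimate of E||Xbar - A||_F^2
signal = max(np.sum(Xbar**2) - err, 0.0)      # estimate of ||A||_F^2
c = signal / (signal + err)
A_hat = c * Xbar

print(np.linalg.norm(Xbar - A), np.linalg.norm(A_hat - A))
```

Shrinking the noisy sample estimate toward the target trades a small bias for a large variance reduction; in this sketch the shrunken estimate has a strictly smaller Frobenius error than the naive mean whenever the noise term is non-negligible.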
Keywords
- Large Scale Optimization
- Programming, Linear
Status: accepted