EURO 2024 Copenhagen
Abstract Submission

4095. Contextual Stochastic Optimization using Mirror Progressive Hedging

Invited abstract in session WD-27: Machine Learning for and with Mathematical Optimization, stream Mathematical Optimization for XAI.

Wednesday, 14:30-16:00
Room: 047 (building: 208)

Authors (first author is the speaker)

1. Luca Ferrarini
LIPN, Sorbonne Paris Nord
2. Louis Bouvier
CERMICS, Ecole des Ponts ParisTech
3. Axel Parmentier
CERMICS, Ecole des Ponts ParisTech
4. Vincent Leclère
CERMICS, Ecole des Ponts

Abstract

Prediction and optimization algorithms can be combined to address decision problems whose uncertain parameters are linked to known contextual information. Within the framework of Contextual Stochastic Optimization, in this talk we explore the integration of learning models, such as neural networks, with optimization components. Our objective is to learn, from a family of policies each parameterized by a neural network, the most effective one. Specifically, our setting is based on hybrid pipelines composed of a machine learning layer, in which policies are encoded by a neural network, followed by a combinatorial optimization layer. We then aim to minimize the empirical risk on the training set. To tackle this challenge, we introduce two novel algorithms: the Bregman Progressive Hedging algorithm, a variant of Progressive Hedging, and the Primal-Dual Mirror Descent algorithm. We present preliminary computational results on the minimum two-stage stochastic spanning tree problem and compare the performance of our learned policy with a well-tailored but computationally heavy strategy designed for the problem. Our results show that our policy achieves good performance levels while being significantly less computationally demanding.
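To fix ideas on the mirror-descent component mentioned above, the following is a minimal, self-contained sketch of entropic mirror descent (the exponentiated-gradient update) on the probability simplex. The toy linear objective, step size, and iteration count are illustrative assumptions only; they are not the algorithms or experiments of the talk.

```python
import math

def mirror_descent_simplex(grad, x0, eta, iters):
    """Minimize a function over the probability simplex via entropic
    mirror descent: a multiplicative (exponentiated-gradient) step
    followed by renormalization, i.e. the Bregman projection under
    the negative-entropy mirror map."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        y = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        s = sum(y)
        x = [yi / s for yi in y]
    return x

# Toy objective: linear cost <c, x>; over the simplex, the minimizer
# concentrates all mass on the coordinate with the smallest cost.
c = [3.0, 1.0, 2.0]
x = mirror_descent_simplex(lambda x: c, [1 / 3, 1 / 3, 1 / 3],
                           eta=0.5, iters=200)
```

The multiplicative update keeps the iterate strictly inside the simplex at every step, which is the practical appeal of choosing an entropy-based Bregman divergence rather than a Euclidean projection.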

Keywords

Status: accepted
