IJCAI 2024 Data Science meets Optimization (DSO) Workshop
The workshop co-chairs are:
- Hoong Chuin Lau (Singapore Management University, SG) <hclau@smu.edu.sg>
- Michele Lombardi (University of Bologna, IT) <michele.lombardi2@unibo.it>
- Jayanta Mandi (KU Leuven, BE) <jayanta.mandi@kuleuven.be>
- Yaoxin Wu (TU Eindhoven, NL) <y.wu2@tue.nl>
- Neil Yorke-Smith (TU Delft, NL) <n.yorke-smith@tudelft.nl>
- Yingqian Zhang (TU Eindhoven, NL) <yqzhang@tue.nl> (primary contact)
Data science and optimization are closely related. On the one hand, many problems in data science can be solved using optimizers; on the other hand, optimization problems stated through classical models, such as those from mathematical programming, cannot be considered independently of historical data. Examples abound. Methods for high-level combinatorial optimization have been shown to profit strongly from automated configuration, and algorithm selection and tuning tools tend to be built on historical data. Machine learning (ML) often relies on optimization techniques such as linear or integer programming, increasingly so for verification and for learning optimal decision trees. Metaheuristic approaches with a learning component are commonplace in mathematical optimization. Black-box optimization makes heavy use of machine learning, and deep learning is increasingly used to predict the solutions of combinatorial problems (such as vehicle routing and machine scheduling problems) directly. In addition, machine learning models have been embedded into combinatorial optimization to address hard-to-model systems, or to validate the ML model itself. Furthermore, decision-focused learning and the predict-and-optimize paradigm aim to differentiate through combinatorial optimization problems during training.
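To make the last point concrete, here is a minimal, self-contained sketch of the decision-focused / predict-and-optimize idea: a linear model predicts item values for a toy knapsack problem, a brute-force solver turns the predictions into a decision, and the model is trained on decision regret rather than prediction error. The data, model, and zeroth-order update below are illustrative assumptions, not a method prescribed by the workshop.

```python
# Illustrative sketch of decision-focused learning / predict-and-optimize
# (all data, weights, and hyperparameters below are made up for illustration).
import numpy as np

rng = np.random.default_rng(0)

def solve_knapsack(values, weights, capacity):
    """Brute-force 0/1 knapsack over a handful of items (toy combinatorial solver)."""
    n = len(values)
    best_x, best_val = np.zeros(n), -np.inf
    for mask in range(1 << n):
        x = np.array([(mask >> i) & 1 for i in range(n)], dtype=float)
        if weights @ x <= capacity and values @ x > best_val:
            best_x, best_val = x, values @ x
    return best_x

# toy instance: item features, true item values, and item weights
n_items, n_feat, capacity = 6, 3, 8.0
features = rng.normal(size=(n_items, n_feat))
true_values = np.abs(features @ np.array([1.0, -0.5, 2.0])) + 1.0
weights = rng.uniform(1.0, 4.0, size=n_items)
x_star = solve_knapsack(true_values, weights, capacity)  # optimal decision in hindsight

def decision_regret(theta):
    """Value lost by optimizing over predicted values instead of the true ones."""
    predicted = features @ theta
    x_pred = solve_knapsack(predicted, weights, capacity)
    return true_values @ x_star - true_values @ x_pred

# The solver is piecewise constant in theta, so plain backpropagation gives zero
# gradients; as a stand-in for smarter differentiation schemes, use a crude
# two-point zeroth-order estimate of the regret gradient.
theta, sigma, lr = rng.normal(size=n_feat), 0.1, 0.05
for _ in range(100):
    eps = rng.normal(size=n_feat)
    grad = (decision_regret(theta + sigma * eps) -
            decision_regret(theta - sigma * eps)) / (2 * sigma) * eps
    theta -= lr * grad

print("decision regret after training:", decision_regret(theta))
```

The point of the sketch is only that the training signal comes from the downstream decision quality rather than from prediction accuracy.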
The workshop invites submissions on topics that include, but are not limited to, the following:
- Applying data science and machine learning (ML) methods to solve combinatorial optimization problems: e.g., algorithm selection based on historical data; speeding up (or driving) the search process with ML, including (deep) reinforcement learning; neural combinatorial optimization; and handling the uncertainty of prediction models in decision-making.
- Using optimization algorithms to develop ML models: e.g., formulating the learning of predictive models as mixed-integer programming (MIP), constraint programming (CP), or satisfiability (SAT); tuning ML models with search algorithms and metaheuristics; and learning constraint models from empirical data.
- Embedding/encoding methods: e.g., combining neural networks with combinatorial optimization, model transformations and solver selection, reasoning over ML models, introducing constraints into (hybrid) ML models, and ‘predict and optimize’.
- Formal analysis of ML models via optimization or constraint satisfaction techniques: safety checking and verification via SMT or MIP, and generation of adversarial examples via similar combinatorial techniques (see the sketch after this list).
- Computing explanations for ML models via techniques developed for optimization or constraint reasoning systems.
- Theoretical or empirical research on the generalization and robustness of ML models, to improve optimization performance in out-of-distribution and worst-case scenarios.
- Learning multiple models for ensemble combinatorial optimization with mixed inputs such as images, graphs, and programming languages.
- Applications that integrate data science and optimization techniques.
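
As an illustration of the formal-analysis topic above, the following sketch asks an SMT solver whether a tiny fixed ReLU network can change the sign of its output within an L∞ ball around a nominal input. It assumes the `z3-solver` Python package; the network weights, nominal input, and radius are invented for the example and do not come from the workshop.

```python
# Illustrative verification sketch: encode a small ReLU network in SMT and
# search for an adversarial input (assumes the `z3-solver` package; all
# weights and numbers are made up for the example).
from z3 import Real, Solver, If, sat

# fixed toy network: y = w2 . relu(W1 x + b1) + b2, with 2 inputs and 2 hidden units
W1 = [[1.0, -2.0], [0.5, 1.5]]
b1 = [0.0, -1.0]
w2 = [1.0, -1.0]
b2 = 0.5
x0 = [1.0, 1.0]   # nominal input; the "decision" is the sign of y
eps = 0.75        # allowed L_inf perturbation around x0

def relu(t):
    # ReLU as an If-term over linear arithmetic
    return If(t > 0, t, 0)

x = [Real(f"x{i}") for i in range(2)]
s = Solver()
for i in range(2):
    s.add(x[i] >= x0[i] - eps, x[i] <= x0[i] + eps)

# symbolic forward pass through the network
hidden = [relu(b1[j] + sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
y = b2 + sum(w2[j] * hidden[j] for j in range(2))

# concrete forward pass at the nominal input to get the reference sign
pre0 = [b1[j] + sum(W1[j][i] * x0[i] for i in range(2)) for j in range(2)]
y0 = b2 + sum(w2[j] * max(pre0[j], 0.0) for j in range(2))

# does any input in the eps-ball flip the sign of the output?
s.add(y > 0 if y0 < 0 else y < 0)
if s.check() == sat:
    m = s.model()
    print("counterexample found:", [m.eval(xi) for xi in x])
else:
    print("no sign flip possible within the eps-ball")
```

The same encoding pattern scales (with MIP or dedicated neural-network verifiers) to larger networks; here the solver either returns a concrete adversarial input or proves that none exists within the chosen ball.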

