BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DSO - EURO Working Group on Data Science meets Optimization - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.euro-online.org/websites/dso
X-WR-CALDESC:Events for DSO - EURO Working Group on Data Science meets Optimization
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20160101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250919
DTEND;VALUE=DATE:20250920
DTSTAMP:20260404T050527Z
CREATED:20260121T195430Z
LAST-MODIFIED:20260121T195430Z
UID:673-1758240000-1758326399@www.euro-online.org
SUMMARY:The Seventh DSO Workshop at ECML-PKDD 2025
DESCRIPTION:The workshop co-chairs are: \n\nYaoxin Wu (TU Eindhoven\, NL) <y.wu2@tue.nl> (primary contact)\nPatrick De Causmaecker (KU Leuven\, Belgium) <patrick.decausmaecker@kuleuven.be>\nHoong Chuin Lau (Singapore Management University\, SG) <hclau@smu.edu.sg>\nMichele Lombardi (University of Bologna\, IT) <michele.lombardi2@unibo.it>\nJayanta Mandi (KU Leuven\, BE) <jayanta.mandi@kuleuven.be>\nNeil Yorke-Smith (TU Delft\, NL) <n.yorke-smith@tudelft.nl>\nYingqian Zhang (TU Eindhoven\, NL) <yqzhang@tue.nl>
URL:https://www.euro-online.org/websites/dso/event/the-seventh-dso-workshop-at-ecml-pkdd-2025/
LOCATION:ECML-PKDD 2025\, Porto\, Portugal
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250825
DTEND;VALUE=DATE:20250830
DTSTAMP:20260404T050527Z
CREATED:20260120T220043Z
LAST-MODIFIED:20260120T220209Z
UID:664-1756080000-1756511999@www.euro-online.org
SUMMARY:Second EURO PhD School Data Science Meets Combinatorial Optimisation
DESCRIPTION:The Second EURO PhD School Data Science Meets Combinatorial Optimisation took place at Eindhoven University of Technology in September 2025. \nLecturers \n\nProf. Carola Doerr Sorbonne University\, France Black-Box Optimization\nProf. Kevin Tierney Bielefeld University\, Germany Deep Reinforcement Learning for Vehicle Routing Problems\nProf. Kate Smith-Miles University of Melbourne\, Australia Instance Space Analysis\nYingqian Zhang\, Sicco Verwer Eindhoven University of Technology\, Delft University of Technology Optimization in Machine Learning\nProf. Pieter Smet KU Leuven\, Belgium Uncertainty in Optimization\nProf. Sicco Verwer Delft University of Technology\, Netherlands Learning Optimal\, Robust Decision Trees\nSymposium Day with\n\n\nProf. Ilker Birbil\, University of Amsterdam\nProf. Zaharah Bukhsh TU Eindhoven\nProf. Kate Smith-Miles University of Melbourne\nProf. Michael Römer Bielefeld University\nProf. Neil Yorke-Smith TU Delft\nProf. Yaoxin Wu TU Eindhoven\nDr. Pavel Troubil Dassault Systemes\nCynthia Luijkx ORTEC\n\n\n\nOrganizers \n\nYingqian Zhang Eindhoven University of Technology Scientific chair & organizer\, Executive committee member EWG/DSO\nYi-Ming Yong\, Igor Smit\, Xia Jiang\, Eindhoven University of Technology\, Local organizers\nPatrick De Causmaecker KU Leuven Scientific chair / coordinator EWG/DSO\n\nAlgorithms for combinatorial optimization feature aspects of data science in various respects. Combinatorial optimization problems (COPs) are mostly NP-hard\, and this complexity is reflected in complicated and large solution spaces. Combinatorial optimization problems often originate from real-world problems\, and this real-world context shapes the set of instances likely to require a solution\, which in turn influences the applicability of specific algorithms. 
NP-hard problems often allow fast solutions for some classes of instances\, while other classes are much harder to solve. Mapping these classes onto a space of instances provides insight into the problem as well as into the applicability of specific algorithms. In recent years\, many advanced machine learning (ML) techniques have been developed to solve combinatorial optimization problems (COPs) directly or to aid algorithms in reaching good solutions more quickly. In addition\, optimization techniques have been used to help build more transparent and fair machine learning models. This school will focus on topics and techniques that leverage data\, machine learning\, and optimization methods\, taught by experts from OR and ML. \n\n\n\nOn each day of the PhD school\, one lecturer\, often assisted by a post-doc\, will teach a state-of-the-art topic\, providing both theory and hands-on training and exercises. In addition to those teaching sessions\, PhD students will get the opportunity to present and discuss their work. Moreover\, there will be several invited talks on new research in machine learning and optimization. Last but not least\, during joint meals and social activities\, there will be plenty of room for socializing and networking. In addition\, a symposium on “AI meets Optimization” will be held during the school. 
URL:https://www.euro-online.org/websites/dso/event/2nd-euro-phd-school-data-science-meets-combinatorial-optimisation-2/
LOCATION:Eindhoven University of Technology\, Eindhoven\, Netherlands
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250622
DTEND;VALUE=DATE:20250628
DTSTAMP:20260404T050527Z
CREATED:20260121T214839Z
LAST-MODIFIED:20260121T214948Z
UID:689-1750550400-1751068799@www.euro-online.org
SUMMARY:Stream on Data Science meets Optimization at EURO 2025
DESCRIPTION:The Stream at EURO 2025 welcomed 14 sessions: \nMA-38 Automating the Design\, Generation and Control of Optimization Algorithms 1\nchairs Andrew J. Parkes\, Ender Özcan\nMB-4 Interpretable Optimization Methods and Applications\nchairs Patrick De Causmaecker\, Sureyya Ozogur-Akyuz\nMB-38 Optimization in contexts with multi-media signals or data security\nchairs Dimitri Papadimitriou\, CHIEN CHU LU\nMC-4 Data science meets strongly NP-Hard CO\nchairs Dimitri Papadimitriou\, Woo Seok Goh\nMC-38 Automating the Design\, Generation and Control of Optimization Algorithms 2\nchairs Andrew J. Parkes\, Ender Özcan\nMD-38 (Deep) Reinforcement Learning for Combinatorial Optimization\nchairs Kevin Tierney\, Yingqian Zhang\, Yaoxin Wu\nTA-38 Forecasting\, prediction and optimization 1\nchair Vittorio Maniezzo\nTB-38 Forecasting\, prediction and optimization 2\nchairs Ahmed Kheiri\, Sven Weinzierl\nTC-38 Forecasting\, prediction and optimization 3\nchair Bahman Rostami-Tabar\nTD-38 Foundation Models and Optimization\nchairs Peer-Olaf Siebers\, Yoon SUA\nWA-38 Industrial Optimization\nchair Grzegorz Pawlak\nWB-38 Optimization and Machine Learning: Methodological Advances\nchairs Michael Römer\, Ender Özcan\nWC-38 Optimization in Online Environments\nchair Grzegorz Pawlak\nWD-38 Privacy-Aware and Optimization-Driven AI Systems\nchairs Sureyya Ozogur-Akyuz\, Polat Goktas
URL:https://www.euro-online.org/websites/dso/event/stream-on-data-science-meets-optimization-at-euro-2025/
LOCATION:EURO 2025\, Leeds\, United Kingdom
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240804
DTEND;VALUE=DATE:20240805
DTSTAMP:20260404T050527Z
CREATED:20260120T221325Z
LAST-MODIFIED:20260121T194933Z
UID:670-1722729600-1722815999@www.euro-online.org
SUMMARY:IJCAI 2024 DSO workshop
DESCRIPTION:The workshop co-chairs are: \n\nHoong Chuin Lau (Singapore Management University\, SG) <hclau@smu.edu.sg>\nMichele Lombardi (University of Bologna\, IT) <michele.lombardi2@unibo.it>\nJayanta Mandi (KU Leuven\, BE) <jayanta.mandi@kuleuven.be>\nYaoxin Wu (TU Eindhoven\, NL) <y.wu2@tue.nl>\nNeil Yorke-Smith (TU Delft\, NL) <n.yorke-smith@tudelft.nl>\nYingqian Zhang (TU Eindhoven\, NL) <yqzhang@tue.nl> (primary contact)\n\nData science and optimization are closely related. On the one hand\, many problems in data science can be solved using optimizers\, and on the other hand\, optimization problems stated through classical models such as those from mathematical programming cannot be considered independent of historical data. Examples are ample. Methods aimed at high-level combinatorial optimization have been shown to profit strongly from configuration. Algorithm selection and tuning tools tend to be built on historical data. Machine Learning (ML) often relies on optimization techniques such as linear or integer programming\, and increasingly so for verification and for learning optimal decision trees. Metaheuristic approaches that have a learning component are commonplace in mathematical optimization. Black-box optimization makes heavy use of machine learning\, and increasingly deep learning is used to predict the output of combinatorial problems (such as vehicle routing and machine scheduling problems) directly. In addition\, machine learning models have been embedded into combinatorial optimization to address hard-to-model systems\, or for validation of the ML model itself. Furthermore\, decision-focused learning and the predict+optimize paradigm aim to differentiate over combinatorial optimization problems during training. 
\nThe workshop invites submissions that include but are not limited to the following topics: \n\n\nApplying data science and machine learning (ML) methods to solve combinatorial optimization problems: such as algorithm selection based on historical data\, speeding up (or driving) the search process using ML including (deep) reinforcement learning\, neural combinatorial optimization and handling uncertainties of prediction models for decision-making. \n\n\nUsing optimization algorithms for the development of ML models: such as formulating the problem of learning predictive models as mixed integer programming (MIP)\, constraint programming (CP)\, or satisfiability (SAT)\, tuning ML models using search algorithms and meta-heuristics\, learning constraint models from empirical data. \n\n\nEmbedding/encoding methods: such as combining neural networks with combinatorial optimization\, model transformations and solver selection\, reasoning over ML models\, introducing constraints in (hybrid) ML models as well as ‘predict and optimize’. \n\n\nFormal analysis of ML models via optimization or constraint satisfaction techniques: safety checking and verification via SMT or MIP\, generation of adversarial examples via similar combinatorial techniques. \n\n\nComputing explanations for ML models via techniques developed for optimization or constraint reasoning systems. \n\n\nTheoretical or empirical research on the generalization and robustness of ML models to improve optimization performance in out-of-distribution and worst-case scenarios. \n\n\nMultiple-model learning for ensemble combinatorial optimization with mixed inputs of images\, graphs\, and programming languages. \n\n\nApplications of integrations of techniques of data science and optimization.
URL:https://www.euro-online.org/websites/dso/event/ijcai-2024-dso-workshop/
LOCATION:IJCAI 2024\, Jeju\, Korea\, Republic of
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240630
DTEND;VALUE=DATE:20240704
DTSTAMP:20260404T050527Z
CREATED:20260121T212629Z
LAST-MODIFIED:20260121T212629Z
UID:686-1719705600-1720051199@www.euro-online.org
SUMMARY:Stream on Data Science meets Optimization at EURO 2024
DESCRIPTION:The stream at EURO 2024 welcomed 10 sessions: \nMA-3 Industrial Optimization chair Grzegorz Pawlak\nMB-3 Optimization in Online Environments chair Grzegorz Pawlak\nMC-3 (Deep) Reinforcement Learning for Combinatorial Optimization 1\nMD-3 (Deep) Reinforcement Learning for Combinatorial Optimization 2\nTA-3 (Deep) Reinforcement Learning for Combinatorial Optimization 3\nchairs for MC-3\, MD-3 and TA-3: Kevin Tierney\, Yingqian Zhang\nTB-3 Machine Learning in Applied Optimization\nTC-3 Optimization and Machine Learning: Methodological Advances\nchair for TB-3 and TC-3: Michael Römer\nTD-3 Data science meets strongly NP-Hard CO chair Dimitri Papadimitriou\nWA-3 Data Science and Optimization chairs Andrew J. Parkes\, Ender Özcan\, Chang Liu\nWB-3 Interpretable Optimization Methods and Applications chair Patrick De Causmaecker
URL:https://www.euro-online.org/websites/dso/event/stream-on-data-science-meets-optimization-at-euro-2024/
LOCATION:EURO 2024\, Lyngby\, Denmark
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230904
DTEND;VALUE=DATE:20230909
DTSTAMP:20260404T050527Z
CREATED:20260120T214051Z
LAST-MODIFIED:20260120T220150Z
UID:657-1693785600-1694217599@www.euro-online.org
SUMMARY:First Euro PhD School Data Science Meets Combinatorial Optimisation
DESCRIPTION:The first EURO PhD school on Data Science meets Optimization took place at Bielefeld University in Germany at the beginning of September 2023. \nLecturers \n\nKevin Tierney Bielefeld University Deep Reinforcement Learning for Vehicle Routing Problems\nMarius Lindauer\, Alexander Tornede Leibniz University Hannover Efficient algorithm design via automated algorithm selection and configuration\nYingqian Zhang\, Sicco Verwer Eindhoven University of Technology\, Delft University of Technology Optimization in Machine Learning\nKate Smith-Miles University of Melbourne Instance Space Analysis\nDimitri Papadimitriou University of Antwerp A third dimension for characterising algorithms: spatial properties\nInvited Keynote: Yaochu Jin Alexander von Humboldt Professor for Artificial Intelligence Bielefeld University Graph Neural Networks for Combinatorial Optimization\n\nOrganizers \n\nMichael Römer Bielefeld University Organization\nAndrew Parkes\, Ender Özcan University of Nottingham Coordinator EWG/DSO\nPatrick De Causmaecker KU Leuven Scientific chair / coordinator EWG/DSO\n\nAlgorithms for combinatorial optimization feature aspects of data science in various respects. Combinatorial optimization problems are mostly NP-hard\, and this complexity is reflected in complicated and large solution spaces. Combinatorial optimization problems often originate from real-world problems\, and this real-world context shapes the set of instances likely to require a solution\, which in turn influences the applicability of specific algorithms. NP-hard problems often allow fast solutions for some classes of instances\, while other classes are much harder to solve. Mapping these classes onto a space of instances provides insight into the problem as well as into the applicability of specific algorithms. 
\nOn each day of the PhD school\, one lecturer\, often assisted by a post-doc\, will teach a state-of-the-art topic\, providing both theory and hands-on training and exercises. In addition to those teaching sessions\, PhD students will get the opportunity to present and discuss their work\, and there will be an invited talk by the Alexander von Humboldt Professor Yaochu Jin. Last but not least\, during joint meals and social activities\, there will be plenty of room for socializing and networking.
URL:https://www.euro-online.org/websites/dso/event/euro-phd-school-data-science-meets-combinatorial-optimisation/
LOCATION:Bielefeld University\, Bielefeld\, Germany
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230710
DTEND;VALUE=DATE:20230715
DTSTAMP:20260404T050527Z
CREATED:20260121T211108Z
LAST-MODIFIED:20260121T215147Z
UID:684-1688947200-1689379199@www.euro-online.org
SUMMARY:Stream on Data Science meets Optimization at IFORS 2023
DESCRIPTION:The Stream at IFORS 2023 welcomed three sessions: \nDecision Support Tools for Astronomical Observatory Management\, chair Rodrigo A. Carrasco \nInductive Optimization and Methods to Incorporate Data Properties\, chairs Dimitri Papadimitriou\, Claudio Sole \nMachine Learning\, Data Analysis and Combinatorial Optimization\, chair Jorik Jooken
URL:https://www.euro-online.org/websites/dso/event/stream-on-data-science-meets-optimization-at-ifors-2023/
LOCATION:IFORS 2023\, Santiago\, Chile
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20220703
DTEND;VALUE=DATE:20230104
DTSTAMP:20260404T050527Z
CREATED:20260121T205830Z
LAST-MODIFIED:20260121T205830Z
UID:681-1656806400-1672790399@www.euro-online.org
SUMMARY:Stream Data Science meets Optimization at EURO 2022
DESCRIPTION:The stream at EURO 2022 welcomed five sessions: \nTA-5 Data Science Meets Optimization Andrew J. Parkes\, Ender Özcan\nTB-5 Integrating Machine Learning in Optimization Methods Michael Römer\nTC-5 Optimization Models for Machine Learning Dimitri Papadimitriou\nTD-5 Better Decisions with Data Patrick De Causmaecker\nWA-5 Automated algorithm tuning\, configuration and construction Manuel López-Ibáñez
URL:https://www.euro-online.org/websites/dso/event/stream-data-science-meets-optimization-at-euro-2022/
LOCATION:EURO 2022\, Espoo\, Finland
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20220523
DTEND;VALUE=DATE:20220530
DTSTAMP:20260404T050527Z
CREATED:20260120T212534Z
LAST-MODIFIED:20260120T212534Z
UID:655-1653264000-1653868799@www.euro-online.org
SUMMARY:The fifth DSO Workshop at IJCAI 2022
DESCRIPTION:Keynote speaker: Diederik M. Roijers (HU University of Applied Sciences Utrecht & Vrije Universiteit Brussel) “On the necessity of using multiple objectives in future AI”\nThe workshop co-chairs are: \n\nTias Guns (KU Leuven\, BE) <tias.guns@kuleuven.be>\nMichele Lombardi (University of Bologna\, IT) <michele.lombardi2@unibo.it>\nNeil Yorke-Smith (TU Delft\, NL) <n.yorke-smith@tudelft.nl>\nYingqian Zhang (TU Eindhoven\, NL) <yqzhang@tue.nl>\n\nThe aim of the workshop is to organize an open discussion and exchange of ideas by researchers from data science\, constraint optimization and operations research in order to identify how techniques from these fields can benefit each other. The workshop invites submissions that include but are not limited to the following topics: \n\n\nApplying data science and machine learning methods to solve combinatorial optimization problems\, such as algorithm selection based on historical data\, speeding up or driving the search process using machine learning including (deep) reinforcement learning\, neural combinatorial optimization\, and handling uncertainties of prediction models for decision-making. \n\n\nUsing optimization algorithms for the development of machine learning models: such as formulating the problem of learning predictive models as MIP\, constraint programming or Boolean satisfiability (SAT). Tuning machine learning models using search algorithms and meta-heuristics. Learning constraint models from empirical data. \n\n\nEmbedding/encoding methods: combining machine learning with combinatorial optimization\, model transformations and solver selection\, reasoning over machine learning models. Introducing constraints in (hybrid) machine learning models as well as ‘predict and optimize’ frameworks. 
\n\n\nFormal analysis of machine learning models via optimization or constraint satisfaction techniques: safety checking and verification via SMT or MIP\, generation of adversarial examples via similar combinatorial techniques. \n\n\nComputing explanations for ML models via techniques developed for optimization or constraint reasoning systems. \n\n\nApplications of integrations of techniques of data science and optimization.
URL:https://www.euro-online.org/websites/dso/event/the-fifth-dso-workshop-at-ijcai-2022/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20210819
DTEND;VALUE=DATE:20210821
DTSTAMP:20260404T050527Z
CREATED:20260120T211530Z
LAST-MODIFIED:20260120T211738Z
UID:650-1629331200-1629503999@www.euro-online.org
SUMMARY:DSO Workshop at IJCAI 2021
DESCRIPTION:Plenary speakers \n\nPaul Grigas (Assistant Professor\, University of California\, Berkeley)\nPatrick Henne (CTO\, ORTEC)\n\nThe workshop co-chairs are: \n\nPatrick De Causmaecker (KU Leuven\, BE)\nTias Guns (Vrije Universiteit Brussel\, BE)\nMichele Lombardi (University of Bologna\, IT)\nYingqian Zhang (TU Eindhoven\, NL)\n\nData science and optimization are closely related. On the one hand\, many problems in data science can be solved using optimizers; on the other hand\, optimization problems stated through classical models such as those from mathematical programming cannot be considered independent of historical data. Examples are ample: Machine Learning (ML) often relies on optimization techniques such as linear or integer programming; reasoning systems have been applied to constrained pattern and sequence mining tasks; a parallel development of metaheuristic approaches has taken place in the domains of data mining and machine learning; methods aimed at high-level combinatorial optimization have been shown to profit strongly from configuration\, algorithm selection and tuning tools building on historical data; ML models can be embedded in combinatorial optimization problems to address hard-to-model systems\, or for validation of the ML model itself; “predict\, then optimize” scenarios can be dealt with in an integrated fashion to considerably improve solution quality.
URL:https://www.euro-online.org/websites/dso/event/dso-workshop-at-ijcai-2021/
LOCATION:IJCAI 2021\, Montreal\, Canada
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20210711
DTEND;VALUE=DATE:20210715
DTSTAMP:20260404T050527Z
CREATED:20260121T204016Z
LAST-MODIFIED:20260121T204522Z
UID:678-1625961600-1626307199@www.euro-online.org
SUMMARY:Stream Data Science meets Optimisation at EURO 2021
DESCRIPTION:All streams at EURO 2021 were hybrid. \nSessions: \nTD-55 Data Science and Optimization Patrick De Causmaecker\, Ender Özcan\, Daniel Karapetyan\nTE-55 Optimization Models for Machine Learning Dimitri Papadimitriou\nTF-55 Better Decisions with Data II Yingqian Zhang\nWA-55 Data-driven decisions in OR Hatice Calik\, Victor Bucarey\nWB-55 Integrating machine learning in optimization methods I Kevin Tierney\nWC-55 Better Decisions with Data I Yingqian Zhang\nWD-55 Integrating machine learning in optimization methods II Kevin Tierney
URL:https://www.euro-online.org/websites/dso/event/stream-data-science-meets-optimisation-at-euro-2021/
LOCATION:EURO 2021\, Athens\, Greece
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20210107
DTEND;VALUE=DATE:20210109
DTSTAMP:20260404T050527Z
CREATED:20260120T210434Z
LAST-MODIFIED:20260120T211628Z
UID:645-1609977600-1610150399@www.euro-online.org
SUMMARY:IJCAI 2020 DSO Workshop
DESCRIPTION:The workshop co-chairs are: \n\nPatrick De Causmaecker (KU Leuven\, BE)\nTias Guns (Vrije Universiteit Brussel\, BE)\nMichele Lombardi (University of Bologna\, IT)\nYingqian Zhang (TU Eindhoven\, NL)\n\nData science and optimization are closely related. On the one hand\, many problems in data science can be solved using optimizers; on the other hand\, optimization problems stated through classical models such as those from mathematical programming cannot be considered independent of historical data. Examples are ample: Machine Learning (ML) often relies on optimization techniques such as linear or integer programming; reasoning systems have been applied to constrained pattern and sequence mining tasks; a parallel development of metaheuristic approaches has taken place in the domains of data mining and machine learning; methods aimed at high-level combinatorial optimization have been shown to profit strongly from configuration\, algorithm selection and tuning tools building on historical data; ML models can be embedded in combinatorial optimization problems to address hard-to-model systems\, or for validation of the ML model itself; “predict\, then optimize” scenarios can be dealt with in an integrated fashion to considerably improve solution quality.
URL:https://www.euro-online.org/websites/dso/event/ijcai-2020-dso-workshop/
LOCATION:IJCAI 2020\, Yokohama\, Japan
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20200621
DTEND;VALUE=DATE:20200627
DTSTAMP:20260404T050527Z
CREATED:20191118T084515Z
LAST-MODIFIED:20191220T134739Z
UID:471-1592697600-1593215999@www.euro-online.org
SUMMARY:DSO stream Data Science meets Optimisation @ IFORS2020
DESCRIPTION:Building on the streams at IFORS 2017 in Quebec\, EURO 2018 in Valencia\, and EURO 2019 in Dublin\, it is our pleasure to announce the stream at IFORS 2020 in Seoul. See the sessions here\, and feel free to use the codes when submitting your abstract.
URL:https://www.euro-online.org/websites/dso/event/dso-stream-data-science-meets-optimisation-ifors2020/
CATEGORIES:Conferences
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20200310
DTEND;VALUE=DATE:20200312
DTSTAMP:20260404T050527Z
CREATED:20200214T144626Z
LAST-MODIFIED:20200310T090357Z
UID:495-1583798400-1583971199@www.euro-online.org
SUMMARY:Symposium in Artificial Intelligence\, Data Analytics and Optimization and PhD defense of Tu San Pham
DESCRIPTION:Location: room 00.21A\, IICK building\, KU Leuven\, campus Kortrijk\, Etienne Sabbelaan 53\, 8500 Kortrijk \nDate: 10th and 11th of March 2020 \nWe are pleased to announce the symposium organized at KU Leuven\, campus Kortrijk on the 10th and 11th of March. \nWe will have six speakers along with a poster session with participants from different groups from Kortrijk\, Ghent\, Lille\, Rotterdam and Bielefeld\, followed by the PhD defense of Tu San Pham. \nPlease help us organize the event better by informing us of your attendance using the registration link. \nThe detailed program: \n\n\n\nTime\nProgram\n\n\nTuesday 10/3\n\n\n14:00-14:45\nKevin Tierney\, Neural Large Neighborhood Search for Vehicle Routing Problems\n\n\n14:45-15:30\nMichael Römer\, Modeling Multiactivity Shift Scheduling Problems with State-Expanded Networks\n\n\n15:30-16:30\nCoffee break and poster session\n\n\n16:30-17:15\nDimitri Papadimitriou\, Machine Learning methods meeting Data Assimilation\n\n\n17:15-18:00\nIlker Birbil\, Data Privacy in Bid-Price Control for Network Revenue Management\n\n\nWednesday 11/3\n\n\n09:00-09:45\nLaetitia Jourdan\, Multi-objective optimization for knowledge discovery in big data\n\n\n09:45-10:30\nLouis-Martin Rousseau\, Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning\n\n\n10:30-11:00\nCoffee break\n\n\n11:00-14:00\nPhD defense of Tu San Pham\, Formal\, exact and metaheuristic methods for combinatorial optimization\n\n\n\n\nThe list of speakers along with the abstracts of their talks can be found below. \n1. Kevin Tierney\, Universität Bielefeld\, Neural Large Neighborhood Search for Vehicle Routing Problems \nLearning how to automatically solve optimization problems has the potential to provide the next big leap in optimization technology. 
The performance of automatically learned heuristics on routing problems has been steadily improving in recent years\, but approaches based purely on machine learning are still outperformed by state-of-the-art optimization methods. To close this performance gap\, we propose a novel large neighborhood search (LNS) framework for vehicle routing that integrates learned heuristics for generating new solutions. The learning mechanism is based on a deep neural network with an attention mechanism and has been especially designed to be integrated into an LNS search setting. We evaluate our approach on the capacitated vehicle routing problem (CVRP) and the split delivery vehicle routing problem (SDVRP). On CVRP instances with up to 297 customers\, our approach significantly outperforms an LNS that uses only handcrafted heuristics and a well-known heuristic from the literature. Furthermore\, we show for the CVRP and the SDVRP that our approach surpasses the performance of existing machine learning approaches and comes close to the performance of state-of-the-art optimization approaches. \n  \n2. Michael Römer\, Universität Bielefeld\, Modeling Multiactivity Shift Scheduling Problems with State-Expanded Networks \nIn this talk\, we propose a new MILP formulation for multi-activity shift scheduling problems based on aggregated flows in state-expanded networks. We discuss the relation of the new formulation to other formulations relying on graphical optimization models based on formal languages such as context-free grammars or deterministic finite automata. In addition\, we present computational results on well-known instances showing that the novel formulation yields both smaller MILP models and\, in most cases\, faster solution times than the other model types\, including implicit grammar-based models. \n\n3. 
Dimitri Papadimitriou\, University of Antwerp\, Machine Learning methods meeting Data Assimilation \nData assimilation is the process of combining time-ordered observation data with a numerical model to i) produce an accurate/optimal characterization of the current model state (state modeling) and ii) predict observations given a model state and the temporal evolution of model states (state prediction). This method finds a wide spectrum of applicability\, from geophysics/climatology to biophysics\, to obtain well-initialized short-term numerical forecasts combined with observation analysis. In this context\, sequential data assimilation aims at finding at every assimilation step an analysis/model state x_a that explains the observation y. Handling such a (nonlinear inverse) problem involves solving the (ill-posed) nonlinear operator equation y = H(x_a)\, where H is the nonlinear time-invariant observation operator (mapping from state into observation space). To provide stable approximates to such an ill-posed nonlinear operator equation and to avoid numerical instability when inverting an ill-conditioned matrix\, this paper develops a regularization method based on penalization by total variation. Motivated by the nonlinearity of the problem\, an iterative method (based on total variation) is combined with a nonlinearity approximation for solving such nonlinear inverse problems. The approach is then compared to a neural-learning-based method; thus\, instead of involving a physics-based method for solving the inverse problem\, a direct method is considered that approximates the inverse nonlinear time-invariant observation operator $H^{-1}$. Computational and convergence analysis in the presence of noisy data and numerical results involving advection-diffusion-reaction phenomena are then presented. \n  \n4.  
Ilker Birbil\, Erasmus University\, Rotterdam\, Data Privacy in Bid-Price Control for Network Revenue Management \nWe present a network revenue management problem where multiple parties agree to share some of the capacities of the network. This collaboration is performed by constructing a large mathematical programming model available to all parties. The parties then use the solution of this model in their own bid-price control systems. In this setting\, the major concern for the parties is the privacy of their input data and of the optimal solutions containing their individual decisions. To address this concern\, we propose an approach based on solving an alternative data-private model constructed with input masking and random transformations. Our main result shows that each party can safely recover only its own optimal decisions after the same data-private model is solved by each party. We also discuss several special cases where possible privacy leakage would require attention. Observing that the dense data-private model may take more time to solve than the sparse original non-private model\, we further propose a modeling approach that introduces sparsity into the data-private model. We support our results with a simulation study where we use a real-world network structure. The talk ends with a discussion of a decomposition approach that we have recently started to work on. \n  \n5. Laetitia Jourdan\, Université de Lille\, Multi-objective optimization for knowledge discovery in big data \nClassical knowledge discovery tasks\, such as classification\, feature selection\, and association rule mining\, may be seen as multi-objective combinatorial optimization problems. Indeed\, in many cases\, some elements have to be combined to produce the solution\, which may be evaluated according to several quality criteria (it is usually necessary to maximize the specificity of the extracted knowledge while maximizing its generality to be applicable). 
\nHence\, efficient multi-objective optimization techniques may contribute to extracting interesting knowledge from datasets. In the context of big data\, some additional specificities have to be taken into account\, and metaheuristics are well suited to address them. \nIn this presentation\, I will focus on how knowledge discovery tasks may be modelled as multi-objective optimization problems and give some insight into how to solve them. I will also focus on the use of optimisation and multi-objective optimisation to realise the knowledge discovery pipeline (MO-AutoML).  \n  \n6. Louis Martin Rousseau\, Polytechnique de Montréal\, Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning \nPrescriptive analytics\, which has optimization at its core\, provides organizations with scalable software for large-scale automated decision-making. Combinatorial optimization algorithms rely heavily on generic methods for identifying tight bounds\, which provide both solutions to problems and optimality guarantees. One broad class of algorithms is dynamic programming (DP)\, which often leverages approximate dynamic programming (ADP) to cope with the well-known “curse of dimensionality” and to provide objective function bounds. This paper studies how machine learning (ML)\, and more specifically deep reinforcement learning (DRL)\, can be used to improve the bounds provided by ADP models\, in particular through learning variable orderings for the decision diagrams that represent the ADP. The DRL models introduced lead to improved primal and dual bounds\, even over the linear programming relaxation. The contributions of this paper are (1) a novel and generic mechanism for utilizing ML to obtain high-quality heuristic solutions and (2) one of the first applications of ML to improve DP relaxation bounds in a generic fashion.  
We apply the methods to classic optimization problems and show through computational testing that optimization bounds can be significantly improved through DRL.
URL:https://www.euro-online.org/websites/dso/event/symposium-in-artificial-intelligence-data-analytics-and-optimization/
LOCATION:Kortrijk\, Etienne Sabbelaan 53\, Kortrijk\, Belgium
CATEGORIES:Other events
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190811
DTEND;VALUE=DATE:20190812
DTSTAMP:20260404T050527
CREATED:20260120T205520Z
LAST-MODIFIED:20260120T205520Z
UID:642-1565481600-1565567999@www.euro-online.org
SUMMARY:IJCAI 2019 DSO Workshop
DESCRIPTION:Data science and optimisation are closely related. On the one hand\, many problems in data science can be solved using optimisers; on the other hand\, optimisation problems stated through classical models such as those from mathematical programming cannot be considered independent of historical data. Examples abound. Machine learning often relies on optimisation techniques such as linear or integer programming. Reasoning systems have been applied to constrained pattern and sequence mining tasks. A parallel development of metaheuristic approaches has taken place in the domains of data mining and machine learning. In recent decades\, methods aimed at high-level combinatorial optimisation have been shown to profit strongly from configuration and tuning tools building on historical data. Algorithm selection has been considered since the 1970s as a tool to identify the most appropriate algorithm for a given instance. Empirical Model Learning uses machine learning models to approximate the behavior of a system\, and such empirical models can be embedded into an optimisation model for efficiently finding optimal system configurations. \nKeynote Speaker: Prof. dr. Holger Hoos (Leiden University\, NL)\nThe workshop co-chairs are: \n\nPatrick De Causmaecker (KU Leuven\, BE)\nMichele Lombardi (University of Bologna\, IT)\nYingqian Zhang (TU Eindhoven\, NL)
URL:https://www.euro-online.org/websites/dso/event/ijcai-2019-dso-workshop/
LOCATION:IJCAI 2019\, Macao\, China
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190623
DTEND;VALUE=DATE:20190627
DTSTAMP:20260404T050527
CREATED:20190121T153952Z
LAST-MODIFIED:20190121T153952Z
UID:424-1561248000-1561593599@www.euro-online.org
SUMMARY:Stream at EURO 2019: Data Science meets Optimization
DESCRIPTION:Building on the success of the 2018 stream\, with 11 presentations\, 44 contributions and interesting discussions\, we have set up a similar stream in this year’s edition. Again\, the two directions implied by the ‘meets’ keyword will be present. Concretely\, the following sessions have been set up: \n\nThe Role of Mathematical Optimization in Data Science\,\nVanesa Guerrero\, Dolores Romero Morales\, submission code 89158f83\nThe Role of Data Science in Optimization\,\nPatrick De Causmaecker\, submission code aaa9e1fb\nIntegrating Machine Learning in Optimization Methods\,\nKevin Tierney\, submission code ec2676b2\nData Science in Optimization Algorithms\,\nAndrew J. Parkes\, submission code 560846b3\nGraphs\, Data and Optimization\,\nPieter Leyman\, submission code 0c46541f\nOptimization of Machine Learning Models\,\nDimitri Papadimitriou\, submission code 949f5b42\n\nContributors to any DSO activity in 2019 will be invited for a special issue later this year. \nSubmit to one of these sessions through the conference website https://www.euro2019dublin.com. \nPlease feel free to use the appropriate submission code. \nOrganisers: \n\nEnder Özcan (ender.ozcan@nottingham.ac.uk)\nAndrew J. Parkes (ajp@cs.nott.ac.uk)\nDolores Romero Morales (drm.eco@cbs.dk)\nPatrick De Causmaecker (patrick.decausmaecker@kuleuven.be)
URL:https://www.euro-online.org/websites/dso/event/stream-at-euro-2019-data-science-meets-optimization/
LOCATION:Dublin\, Ireland
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180714
DTEND;VALUE=DATE:20180715
DTSTAMP:20260404T050527
CREATED:20170224T190657Z
LAST-MODIFIED:20181231T151444Z
UID:194-1531526400-1531612799@www.euro-online.org
SUMMARY:DSO workshop: FAIM 2018
DESCRIPTION:DSO Workshop at the Federated Artificial Intelligence Meeting 2018 (DSO@FAIM) \nData science and optimisation are closely related. On the one hand\, many problems in data science (data mining\, machine learning\, statistical methods\, but also problems set in constraint programming) can be solved using optimisers; on the other hand\, optimisation problems stated through classical models such as those from mathematical programming cannot be considered independent of historical data. Examples abound. Machine learning often relies on optimisation techniques such as linear or integer programming. A parallel development of metaheuristic approaches has taken place in the domains of data mining and combinatorial optimisation. In recent decades\, methods aimed at high-level combinatorial optimisation have been shown to profit strongly from configuration and tuning tools building on historical data. Algorithm selection has been considered since the 1970s as a tool to select the most appropriate algorithm for a given instance. One observation is that the models of combinatorial optimisation are incomplete and need extra information that may be implicit in the available data. Another observation is that data science methods employ algorithms from combinatorial optimisation without profiting from the latest developments. The aim of the current workshop is to bring scientists from the different fields together for a fruitful day of discussions. Feel free to visit the EURO working group website. \nScope \nThe interaction of data science (DS) and optimisation (O) is the central theme of the working group (DSO). 
DSO originates from the observation that\, on the one hand\, real-time optimisation algorithms are tightly linked to their data context and\, on the other hand\, many data-analytic algorithms rely on optimisation algorithms\, while many modern optimisation algorithms have some form of machine learning embedded. The first observation has led to developments in automated algorithm tuning\, configuration and construction to adapt or even create algorithms from a historical body of data. The second observation has given rise to the development of similar ideas in different contexts\, but without much interaction. It is the aim of the working group to bridge the gap between the two domains. All contributors will be invited to send their paper to a special issue (to be announced). \nAim \nThe aim of the current workshop is to organise an open discussion and exchange of ideas among researchers from the AI and OR domains. Authors are invited to send in a contribution in the form of a position paper. A reviewing panel will select the papers to be presented at the workshop according to their suitability to these aims. Finished work highlighting the opportunities will be welcomed\, as will sound descriptions and elaborations of good ideas. \nPaper Submission \nPapers of up to 4 pages are welcome at the submission page.\nPlease use the IJCAI template. \nSpecial issue \nA post-conference publication will be prepared; contributors will be invited. \nImportant dates \nSubmission deadline: 20 May 2018\nNotification of acceptance: 27 May 2018.
URL:https://www.euro-online.org/websites/dso/event/3/
LOCATION:Stockholm\, Sweden
CATEGORIES:Conferences
ATTACH;FMTTYPE=image/jpeg:https://www.euro-online.org/websites/dso/wp-content/uploads/sites/17/2016/12/aboutus.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180708
DTEND;VALUE=DATE:20180712
DTSTAMP:20260404T050527
CREATED:20190121T162138Z
LAST-MODIFIED:20190121T162138Z
UID:427-1531008000-1531353599@www.euro-online.org
SUMMARY:European Working Group: Data Science Meets Optimization
DESCRIPTION:Stream at EURO 2018 in Valencia \nOrganisers \n\nPatrick De Causmaecker (Patrick.DeCausmaecker@kuleuven.be)\nEnder Özcan (ender.ozcan@nottingham.ac.uk)\nAndrew J. Parkes (ajp@cs.nott.ac.uk)\nDolores Romero Morales (drm.eco@cbs.dk)\n\nList of abstracts: EURO2018 \nSessions \n\nThe Role of Mathematical Optimization in Data Science I\, Chair: Vanesa Guerrero\nIntegrating Machine Learning in Optimization Methods\, Chair: Kevin Tierney\nOptimization of Machine Learning Models\, Chair: Dimitri Papadimitriou\nThe Role of Mathematical Optimization in Data Science II\, Chair: Vanesa Guerrero\nThe Role of Mathematical Optimization in Data Science III\, Chair: Philipp Baumann\nThe Role of Mathematical Optimization in Data Science IV\, Chair: Vanesa Guerrero\nThe Role of Mathematical Optimization in Data Science V\, Chair: Adam Elmachtoub\nThe Role of Data Science in Optimization\, Chair: Patrick De Causmaecker\nEvaluation as a Service for Optimization and Data Science\, Chairs: Szymon Wasik\, Maciej Antczak\nGraphs\, Data and Optimization\, Chair: Pieter Leyman\nData Science in Optimization Algorithms\, Chairs: Andrew J. Parkes\, Daniel Karapetyan
URL:https://www.euro-online.org/websites/dso/event/european-working-group-data-science-meets-optimization/
LOCATION:Valencia\, Spain
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20170717
DTEND;VALUE=DATE:20170722
DTSTAMP:20260404T050527
CREATED:20170331T131042Z
LAST-MODIFIED:20181231T142004Z
UID:223-1500249600-1500681599@www.euro-online.org
SUMMARY:DSO Stream at IFORS 2017
DESCRIPTION:Data Science meets Optimisation Streams 2017 \nCall for Abstracts \nStream on Data Science meets Optimisation \nQuebec City\, Quebec\, Canada (July 17-21\, 2017) \nScope: \nThe interaction of data science (DS) and optimisation (O) is the central theme of the working group (DSO). DSO originates from the observation that\, on the one hand\, real-time optimisation algorithms are tightly linked to their data context and\, on the other hand\, many data-analytic algorithms rely on optimisation algorithms\, while many modern optimisation algorithms have some form of machine learning embedded. The first observation has\, among others\, led to developments in automated algorithm tuning\, configuration and construction to adapt or even create algorithms from a historical body of data. The second observation has given rise to the development of similar ideas in different contexts\, but without much interaction. It is the aim of the working group to bring the two domains closer to each other and so better contribute to the aims of EURO\, the Association of European Operational Research Societies. \nCo-located with \n21st Conference of the International Federation of Operational Research Societies (IFORS 2017) \nQuebec City\, Quebec\, Canada (July 17-21\, 2017)
URL:https://www.euro-online.org/websites/dso/event/test-all-day/
LOCATION:Quebec City\, Canada
CATEGORIES:Conferences
ATTACH;FMTTYPE=image/jpeg:https://www.euro-online.org/websites/dso/wp-content/uploads/sites/17/2016/12/slide3.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20170605
DTEND;VALUE=DATE:20170606
DTSTAMP:20260404T050527
CREATED:20170216T203252Z
LAST-MODIFIED:20181231T143243Z
UID:172-1496620800-1496707199@www.euro-online.org
SUMMARY:DSO workshop: CEC 2017 & CPAIOR 2017
DESCRIPTION:DSO workshop co-located with CEC 2017 and CPAIOR 2017\nINVITED SPEAKERS: \n\nOn the role of (machine) learning in (mathematical) optimization\nAndrea Lodi\n\nIn this talk\, I try to explain my point of view\, as a mathematical optimizer especially concerned with discrete (integer) decisions\, on Big Data. I advocate a tight integration of machine learning and mathematical optimization (among others) to deal with the challenges of decision-making in data science. For such an integration I concentrate on three questions: 1) what can optimization do for machine learning? 2) what can machine learning do for optimization? 3) which new applications can be solved by the combination of machine learning and optimization? Finally\, I will discuss in detail two areas in which machine learning techniques have been (successfully) applied in the area of mixed-integer programming. [PDF] \n\nRelational Quadratic Programming: Exploiting Symmetries for Modelling and Solving Quadratic Programs\nKristian Kersting\n\nSymmetry is the essential element of lifted inference\, which has recently demonstrated the possibility of performing very efficient inference in highly connected but symmetric probabilistic models\, a.k.a. relational probabilistic models. This raises the question of whether the same holds for optimization problems in general. In this talk I shall demonstrate that for a large class of mathematical programs this is actually the case. More precisely\, I shall introduce the concept of fractional symmetries of linear and convex quadratic programs (QPs)\, which lie at the heart of many machine learning approaches\, and exploit it to lift\, i.e.\, to compress\, them. 
These lifted QPs can then be tackled with the usual optimization toolbox (off-the-shelf solvers\, cutting-plane algorithms\, stochastic gradients\, etc.): if the original QP exhibits symmetry\, then the lifted one will generally be more compact\, and hence its optimization is likely to be more efficient. [PDF] \nThis talk is based on joint work with Martin Mladenov\, Martin Grohe\, Leonard Kleinhans\, Pavel Tokmakov\, Babak Ahmadi\, Amir Globerson\, and many others.
URL:https://www.euro-online.org/websites/dso/event/event-1/
CATEGORIES:Conferences
ATTACH;FMTTYPE=image/jpeg:https://www.euro-online.org/websites/dso/wp-content/uploads/sites/17/2016/12/slide3.jpg
END:VEVENT
END:VCALENDAR