2703. A Diffusion-Driven Generative Framework for Pareto Front Learning
Invited abstract in session WB-51: Machine learning approaches in multiobjective decision making, stream Multiobjective and vector optimization.
Wednesday, 10:30-12:00, Room: Parkinson B22
Authors (first author is the speaker)
1. Sedjro Salomon Hotegni, Computer Science, TU Dortmund University
2. Sebastian Peitz, Department of Computer Science, University of Paderborn
Abstract
This work introduces a novel diffusion-driven generative framework for Pareto front learning in multi-objective optimization. By leveraging conditional denoising diffusion probabilistic models (DDPMs), our approach directly generates high-quality Pareto optimal solutions in the decision space. Rather than relying on explicitly labeled training data, we condition the model on objective function values by treating them as upper bounds, thereby encouraging the production of improved candidate solutions during sampling. To further steer the reverse diffusion process toward the Pareto front, we integrate a guidance mechanism based on Multiple Gradient Descent (MGD), which not only mimics a common descent direction for all objectives but also incorporates repulsive interactions to enhance diversity among solutions. Implemented with a diffusion transformer architecture and cross-attention conditioning, our method effectively combines generative modeling with gradient-based optimization principles. Experimental evaluations demonstrate the potential of our framework to efficiently approximate the Pareto front, offering a promising alternative to traditional multi-objective optimization techniques.
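To make the guidance idea concrete, the following is a minimal NumPy sketch, not the authors' implementation, of one MGD-guided reverse diffusion step on a hypothetical bi-objective problem. The denoiser `eps_model` is a stand-in for the trained conditional diffusion transformer, the two quadratic objectives and the guidance scales `guide_scale` and `repel_scale` are illustrative assumptions, and the closed-form MGD weight is specific to the two-objective case.

```python
# Minimal sketch (assumed, not the authors' code): MGD-guided reverse diffusion.
import numpy as np

rng = np.random.default_rng(0)

def objective_grads(x):
    """Gradients of two toy convex objectives f1 = ||x-1||^2, f2 = ||x+1||^2."""
    return np.stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)], axis=1)  # (B, 2, D)

def mgd_direction(grads):
    """Common descent direction: negated min-norm point of conv{g1, g2}.
    For two objectives the optimal convex weight has a closed form."""
    g1, g2 = grads[:, 0], grads[:, 1]
    diff = g1 - g2
    denom = np.sum(diff * diff, axis=-1, keepdims=True) + 1e-12
    alpha = np.clip(np.sum((g2 - g1) * g2, axis=-1, keepdims=True) / denom, 0.0, 1.0)
    return -(alpha * g1 + (1.0 - alpha) * g2)

def repulsion(x, bandwidth=1.0):
    """Kernel-based repulsion pushing candidates apart to promote diversity."""
    diffs = x[:, None, :] - x[None, :, :]                 # (B, B, D)
    sq = np.sum(diffs ** 2, axis=-1, keepdims=True)
    k = np.exp(-sq / (2.0 * bandwidth ** 2))
    return np.sum(k * diffs, axis=1) / (bandwidth ** 2)   # (B, D)

def eps_model(x_t, t, cond):
    """Hypothetical stand-in for the conditional denoiser that predicts the
    noise given x_t, the timestep, and the objective-value condition."""
    return np.zeros_like(x_t)

def guided_reverse_step(x_t, t, cond, betas, alphas_bar,
                        guide_scale=0.05, repel_scale=0.01):
    """One DDPM reverse step with MGD guidance plus a repulsive diversity term."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    eps = eps_model(x_t, t, cond)
    # Standard DDPM posterior mean computed from the predicted noise.
    mean = (x_t - beta_t / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alpha_t)
    # Steer the mean along a common descent direction and away from neighbors.
    mean = mean + guide_scale * mgd_direction(objective_grads(mean)) \
                + repel_scale * repulsion(mean)
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(beta_t) * noise

# Illustrative usage: run the guided reverse chain from pure noise.
T, dim, batch = 100, 2, 32
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)
cond = np.full((batch, 2), 5.0)   # objective upper bounds used as conditioning (illustrative)
x = rng.standard_normal((batch, dim))
for t in reversed(range(T)):
    x = guided_reverse_step(x, t, cond, betas, alphas_bar)
```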
Keywords
- Multi-Objective Decision Making
- Machine Learning
- Complexity and Approximation
Status: accepted