2436. PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization
Invited abstract in session MD-3: (Deep) Reinforcement Learning for Combinatorial Optimization 2, stream Data Science Meets Optimization.
Monday, 14:30-16:00, Room: 1005 (building: 202)
Authors (first author is the speaker)
1. André Hottung, Decision and Operation Technologies, Bielefeld University
2. Kevin Tierney, Decision and Operation Technologies, Bielefeld University
Abstract
Reinforcement learning-based methods for constructing solutions to combinatorial optimization problems are rapidly approaching the performance of human-designed algorithms. To further narrow the gap, learning-based approaches must efficiently explore the solution space during the search process. Recent approaches artificially increase exploration by enforcing diverse solution generation through handcrafted rules; however, these rules can impair solution quality and are difficult to design for more complex problems. In this paper, we introduce PolyNet, an approach for improving exploration of the solution space by learning complementary solution strategies. In contrast to other works, PolyNet uses only a single decoder and a training schema that does not enforce diverse solution generation through handcrafted rules. We evaluate PolyNet on four combinatorial optimization problems and observe that the implicit diversity mechanism allows PolyNet to find better solutions than approaches that explicitly enforce diverse solution generation.
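The abstract does not detail the training schema, so the following is only a minimal sketch of one plausible "implicit diversity" objective: sample K solutions per instance (each from the same single decoder, conditioned differently) and reinforce only the best of the K. The function name best_of_k_loss, the tensor shapes, and the PyTorch setting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a best-of-K training objective (assumptions, not the paper's code).
import torch

def best_of_k_loss(log_probs, costs):
    """log_probs, costs: tensors of shape (batch, K).
    Each of the K solutions per instance is assumed to be sampled from a single
    decoder under a different conditioning vector. Only the lowest-cost solution
    per instance receives a gradient, using the mean cost over the K samples as
    a baseline, so different conditionings can specialize without handcrafted
    diversity rules."""
    baseline = costs.mean(dim=1, keepdim=True)
    advantage = costs - baseline
    idx = costs.argmin(dim=1, keepdim=True)          # best solution per instance
    best_adv = advantage.gather(1, idx)
    best_logp = log_probs.gather(1, idx)
    return (best_adv.detach() * best_logp).mean()

# Toy usage with random tensors standing in for decoder outputs.
batch, K = 8, 4
log_probs = torch.randn(batch, K, requires_grad=True)  # sum of log-probs per sampled solution
costs = torch.rand(batch, K)                           # objective values (e.g., tour lengths)
loss = best_of_k_loss(log_probs, costs)
loss.backward()
```

In such a setup, only the best of the K samples is ever rewarded, so the gradient never penalizes the other conditionings for exploring differently; this is one way an implicit diversity mechanism could arise without explicit diversity constraints.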
Keywords
- Artificial Intelligence
- Combinatorial Optimization
- Machine Learning
Status: accepted