139. Integrating Simulation, Optimization and Reinforcement Learning for a General Class of Scheduling Problems
Invited abstract in session WC-60: Machine Learning in Machine Scheduling, stream Project Management and Scheduling.
Wednesday, 12:30-14:00Room: S09 (building: 101)
Authors (first author is the speaker)
1. Haitao Li, University of Missouri - St. Louis
Abstract
We study a general class of stochastic scheduling problems that includes various machine scheduling and resource-constrained project scheduling problems under uncertainty as special cases. There are two general strategies for solving this class of scheduling problems with stochastic activity durations: open-loop and closed-loop. Although a closed-loop policy is theoretically advantageous over an open-loop policy, computing an optimal closed-loop policy requires solving the Bellman equation, which suffers from the well-known "curse of dimensionality" for large instances. In this paper, a Markov decision process (MDP) model built upon a discrete-time Markov chain (DTMC) is developed for the addressed class of problems. To tackle the curse of dimensionality in obtaining the exact closed-loop policy for the MDP model, we present a general approximate dynamic programming (ADP) framework, called Sim-Opt-RL, that integrates simulation, optimization, and reinforcement learning to provide a high-quality and computationally tractable closed-loop policy. We implement the Sim-Opt-RL framework for the well-studied stochastic resource-constrained project scheduling problem (SRCPSP) with a custom-designed genetic algorithm (GA); the resulting method outperforms the existing closed-loop algorithm and is competitive with the state-of-the-art open-loop algorithms for the SRCPSP.
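To illustrate the kind of simulation-driven closed-loop policy the abstract describes, the sketch below runs tabular Q-learning on a hypothetical toy instance (not from the paper): three jobs with stochastic durations on one machine, minimizing expected total weighted completion time. The job data, cost decomposition, and learning parameters are all illustrative assumptions; the paper's Sim-Opt-RL framework is far more general.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical toy instance: name -> (mean duration, weight).
JOBS = {"A": (2, 3.0), "B": (4, 1.0), "C": (3, 2.0)}

def sample_duration(mean):
    # Duration is mean - 1 or mean + 1 with equal probability (toy uncertainty model).
    return mean + random.choice((-1, 1))

# Tabular Q-learning with state = frozenset of unscheduled jobs. The per-step
# cost -d * (total weight still unscheduled) telescopes to the total weighted
# completion time, so this compact state is Markovian for this objective.
Q = defaultdict(float)
ALPHA, EPS, EPISODES = 0.1, 0.2, 5000

for _ in range(EPISODES):
    remaining = frozenset(JOBS)
    while remaining:
        acts = sorted(remaining)
        if random.random() < EPS:          # epsilon-greedy exploration
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda j: Q[(remaining, j)])
        d = sample_duration(JOBS[a][0])
        reward = -d * sum(JOBS[j][1] for j in remaining)
        nxt = remaining - {a}
        future = max((Q[(nxt, j)] for j in nxt), default=0.0)
        Q[(remaining, a)] += ALPHA * (reward + future - Q[(remaining, a)])
        remaining = nxt

# Roll out the learned closed-loop policy greedily.
remaining, order = frozenset(JOBS), []
while remaining:
    a = max(sorted(remaining), key=lambda j: Q[(remaining, j)])
    order.append(a)
    remaining -= {a}
print(order)  # highest weight/duration ratio scheduled first
```

Because the state records which jobs remain, the learned policy is closed-loop: it can react to realized durations during execution, unlike an open-loop schedule fixed in advance. On this instance the policy recovers the weighted-shortest-processing-time order A, C, B.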
Keywords
- Project Management and Scheduling
- Programming, Integer
- Scheduling
Status: accepted