237. Optimizing Medical Drone Dispatch and Charging Operations Using Deep Reinforcement Learning
Invited abstract in session TC-12: Insights through Unsupervised Learning, stream Artificial Intelligence, Machine Learning and Optimization.
Thursday, 11:45-13:15, Room H10
Authors (first author is the speaker)
1. Abolfazl Maleki, Department of Industrial Engineering and Business Information Systems (IEBIS), University of Twente
2. Amin Asadi, Industrial Engineering & Management (IEBIS), University of Twente
3. Breno Alves Beirigo, Industrial Engineering and Business Information Systems, University of Twente
4. Derya Demirtas, Industrial Engineering & Business Information Systems, University of Twente
5. Erwin W. Hans, Industrial Engineering & Business Information Systems, University of Twente, Faculty of Behavioural, Management and Social Sciences
Abstract
Drones have become a vital solution for healthcare logistics, enabling the delivery of medical supplies to remote or disaster-affected areas where traditional transportation methods (e.g., trucks) face significant challenges. Ensuring efficient drone delivery requires managing limited resources and minimizing operational costs, typically under demand uncertainty. Delivery drones, characterized by their limited flight range, require slow or fast recharging to maintain continuous delivery operations. However, these charging strategies involve conflicting objectives: excessive use of fast charging ensures high drone availability but accelerates battery degradation, whereas reliance on slow charging preserves battery life but may reduce the number of drones ready for deployment.
To address this issue, we propose charging/dispatching policies that balance drone availability to satisfy medical demands with operational charging costs. Specifically, we introduce a Stochastic Multi-class Allocation and Recharging model for Transportation (SMART), which optimizes medical item deliveries by considering both fast- and slow-charging strategies. We formulate this problem using a novel Markov Decision Process (MDP) framework. To efficiently optimize dispatching (i.e., selecting which drones to send on missions) and recharging decisions (i.e., determining when and how many batteries to charge via either slow or fast chargers), we then develop a deep reinforcement learning approach. Computational experiments demonstrate the effectiveness of our model, providing valuable insights into improving the sustainability and cost-efficiency of drone-based healthcare logistics.
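To make the MDP structure described above concrete, the following is a minimal toy sketch of such an environment. All specifics (fleet size, charge increments, demand distribution, cost weights, and the degradation model) are illustrative assumptions, not the paper's actual SMART formulation: the state tracks per-drone battery levels, the action pairs a dispatch set with a fast-charge set (remaining idle drones slow-charge), and the reward trades off unmet medical demand against charging-induced battery degradation.

```python
import random


class SmartDroneEnv:
    """Toy sketch of a SMART-style dispatch/recharge MDP.

    All numeric parameters below are illustrative placeholders,
    not values from the paper.
    """

    def __init__(self, n_drones=5, seed=0):
        self.rng = random.Random(seed)
        self.n_drones = n_drones
        self.reset()

    def reset(self):
        # State: battery charge level per drone, in [0.0, 1.0].
        self.battery = [1.0] * self.n_drones
        return tuple(self.battery)

    def step(self, dispatch, fast_charge):
        """Apply one decision epoch.

        dispatch:    set of drone indices sent on delivery missions
        fast_charge: set of idle drone indices put on fast chargers
                     (all other idle drones slow-charge)
        Returns (next_state, reward).
        """
        demand = self.rng.randint(0, self.n_drones)  # stochastic demand
        served, degr_cost = 0, 0.0
        for i in range(self.n_drones):
            if i in dispatch and self.battery[i] >= 0.5:
                self.battery[i] -= 0.5               # mission consumes charge
                served += 1
            elif i in fast_charge:
                self.battery[i] = min(1.0, self.battery[i] + 0.5)
                degr_cost += 0.02                    # fast charging degrades more
            else:
                self.battery[i] = min(1.0, self.battery[i] + 0.2)
                degr_cost += 0.005                   # slow charging degrades less
        unmet = max(0, demand - served)
        # Reward penalizes unmet demand and battery degradation.
        reward = -(10.0 * unmet + 100.0 * degr_cost)
        return tuple(self.battery), reward
```

A deep RL agent would then learn a policy mapping the battery-level state (plus, in the full model, demand and charger information) to joint dispatch/charging actions, rather than the hand-picked action sets shown here.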
Keywords
- Artificial Intelligence
- Stochastic Models
- Logistics
Status: accepted