145. Gradient-free Optimization in Quantum Reinforcement Learning
Invited abstract in session TA-7: Quantum Computing & OR, stream Simulation and Quantum Computing.
Thursday, 8:45-10:15, Room: U2-205
Authors (first author is the speaker)
1. Maximilian Moll, Universität der Bundeswehr München
2. Stefan Klug, Universität der Bundeswehr München
Abstract
With quantum computing hardware improving at an impressive pace, more and more ideas are being developed for how NISQ machines can be used in practice. One popular idea is to use quantum variational circuits in place of classical neural networks. While these typically require much smaller models than their classical counterparts, their optimization can still cause performance issues: the parameter-shift rule implies that the number of circuit evaluations per gradient step scales linearly in the number of parameters. With speed being an essential concern on current machines, in particular on cloud devices, this can extend compute time significantly. Recently, therefore, gradient-free approaches have been explored with good results. Here, we investigate across several environments how well these results transfer to quantum reinforcement learning. A particular focus is on a fairly simple (1+1) evolutionary algorithm, which has the added benefit of few hyperparameters. The performance of these algorithms is compared to that of traditional gradient-based training in terms of training quality as well as the number of circuit evaluations needed. Additional comparisons are drawn between hyperparameter settings across different environments.
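For context on the scaling claim: in the standard parameter-shift rule, a circuit parameter whose gate is generated by a Pauli operator has an exact gradient obtainable from two shifted circuit evaluations, so a full gradient over n parameters costs 2n evaluations per optimization step. A common statement of the rule (a textbook form, not taken from the abstract itself) is:

```latex
\frac{\partial \langle H \rangle(\boldsymbol{\theta})}{\partial \theta_i}
  = \frac{1}{2}\left[
      \langle H \rangle\!\left(\boldsymbol{\theta} + \tfrac{\pi}{2}\,\mathbf{e}_i\right)
      - \langle H \rangle\!\left(\boldsymbol{\theta} - \tfrac{\pi}{2}\,\mathbf{e}_i\right)
    \right]
```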
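To illustrate the kind of (1+1) evolutionary algorithm the abstract refers to, here is a minimal sketch. All names (`evaluate_return`, `sigma`, etc.) are hypothetical placeholders rather than the authors' implementation; in the quantum RL setting, the fitness function would be the episodic return obtained by rolling out the policy defined by the candidate circuit parameters.

```python
import numpy as np

def one_plus_one_ea(evaluate_return, n_params, n_steps=500, sigma=0.1, seed=0):
    """Minimal (1+1) evolutionary algorithm: one parent, one Gaussian-mutated
    child per step; the child replaces the parent only if it is at least as fit.

    evaluate_return: callable mapping a parameter vector to episodic return
                     (hypothetical stand-in for a quantum-circuit policy rollout).
    """
    rng = np.random.default_rng(seed)
    parent = rng.normal(scale=0.1, size=n_params)   # initial circuit parameters
    parent_fit = evaluate_return(parent)
    for _ in range(n_steps):
        child = parent + rng.normal(scale=sigma, size=n_params)  # Gaussian mutation
        child_fit = evaluate_return(child)          # one circuit/rollout evaluation
        if child_fit >= parent_fit:                 # greedy (1+1) selection
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

Note the cost profile this sketch makes explicit: each (1+1) step needs a single new evaluation (the parent's fitness is cached), whereas a parameter-shift gradient step needs 2n, which is the evaluation gap the abstract compares. The mutation scale sigma is essentially the only hyperparameter, matching the "few hyperparameters" point above.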
Keywords
- Prescriptive Analytics
- Algorithm Analysis
- Machine Learning
Status: accepted