Irtaza Khalid, Carrie Weidner, S.G. Schirmer, Edmond Jonckheere, Frank C. Langbein. Sample-efficient Model-based Reinforcement Learning for Quantum Control. Poster, BQIT:23, 2023. [PDF:poster]
- Our model-based reinforcement learning (RL) algorithm reduces the sample complexity for time-dependent noisy quantum gate control tasks by at least an order of magnitude over model-free RL.
- The model is a differentiable ordinary differential equation (ODE) within our Learnable Hamiltonian Model-Based Soft Actor-Critic (LH-MBSAC) algorithm.
- We encode a partially characterised Hamiltonian in the model and only learn the time-independent term.
- The learned model can be leveraged to further optimise RL controllers using GRAPE (gradient ascent pulse engineering).
- LH-MBSAC is a step towards bridging the gap between theoretical and experimental quantum control by reducing the experimental resource requirements for RL control.
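The two-stage idea in the bullets above — fit only the unknown time-independent Hamiltonian term through a differentiable ODE model, then run GRAPE-style pulse optimisation on the fitted model — can be sketched on a toy single qubit. This is a minimal illustration under assumed details, not the poster's actual LH-MBSAC implementation: the drift frequency `omega`, the grid sizes, and finite-difference gradients (in place of automatic differentiation) are all assumptions made for a self-contained example.

```python
import numpy as np

# Pauli matrices and integration grid (illustrative values, not from the poster)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
DT, N = 0.1, 20
PSI0 = np.array([1, 0], dtype=complex)

def step(psi, omega, u):
    """One piecewise-constant step of i dpsi/dt = H psi, H = (omega/2) Sz + u Sx."""
    a, b = 0.5 * omega, u
    r = np.hypot(a, b)
    if r < 1e-12:
        return psi
    # exp(-i DT H) = cos(r DT) I - i sin(r DT) H / r, since H^2 = r^2 I for a qubit
    U = np.cos(r * DT) * np.eye(2) - 1j * (np.sin(r * DT) / r) * (a * SZ + b * SX)
    return U @ psi

def trajectory(omega, controls):
    """States after each control interval under the piecewise-constant drive."""
    psi, out = PSI0, []
    for u in controls:
        psi = step(psi, omega, u)
        out.append(psi)
    return np.array(out)

# Stage 1: learn the unknown time-independent (drift) term from trajectory data.
# `data` stands in for experimentally measured trajectories at the true omega = 1.
rng = np.random.default_rng(0)
probe = rng.uniform(-1, 1, N)
data = trajectory(1.0, probe)

def model_loss(omega):
    return float(np.sum(np.abs(trajectory(omega, probe) - data) ** 2))

omega_fit, lr, eps = 0.5, 0.01, 1e-6
for _ in range(300):
    g = (model_loss(omega_fit + eps) - model_loss(omega_fit - eps)) / (2 * eps)
    omega_fit -= lr * g

# Stage 2: GRAPE-style gradient ascent on the learned model to steer |0> to |1>.
TARGET = np.array([0, 1], dtype=complex)

def fidelity(controls):
    return float(np.abs(TARGET.conj() @ trajectory(omega_fit, controls)[-1]) ** 2)

controls = rng.uniform(-0.5, 0.5, N)
for _ in range(400):
    grad = np.zeros(N)
    for k in range(N):
        d = np.zeros(N); d[k] = eps
        grad[k] = (fidelity(controls + d) - fidelity(controls - d)) / (2 * eps)
    controls += 1.0 * grad  # plain gradient ascent on the model's fidelity
```

The design choice mirrored here is that only the scalar drift parameter is learned, while the known control structure is hard-coded in the model; once fitted, the same differentiable model serves as the plant for gradient-based pulse design, as in the GRAPE refinement step above.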
Cite this page as 'Frank C Langbein, "BQIT:23 – Sample-efficient Model-based Reinforcement Learning for Quantum Control," Ex Tenebris Scientia, 10th April 2023, https://langbein.org/bqit23/ [accessed 21st December 2024]'.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.