BQIT:23 – Sample-efficient Model-based Reinforcement Learning for Quantum Control


Irtaza Khalid, Carrie Weidner, S.G. Schirmer, Edmond Jonckheere, Frank C. Langbein. Sample-efficient Model-based Reinforcement Learning for Quantum Control. Poster, BQIT:23, 2023. [PDF:poster]

  1. Our model-based reinforcement learning (RL) algorithm reduces the sample complexity for time-dependent noisy quantum gate control tasks by at least an order of magnitude over model-free RL.
  2. The model is a differentiable ordinary differential equation (ODE) within our Learnable Hamiltonian Model-Based Soft-Actor Critic (LH-MBSAC) algorithm.
  3. We encode a partially characterised Hamiltonian in the model and learn only its unknown time-independent term.
  4. The learned model can be further leveraged to optimize the RL controllers with GRAPE (gradient ascent pulse engineering).
  5. LH-MBSAC is a step towards bridging the gap between theoretical and experimental quantum control by reducing the experimental resource requirements for RL control.
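The core idea behind the learnable Hamiltonian model can be illustrated with a minimal sketch. This is not the authors' LH-MBSAC implementation: the control ansatz, the single-parameter `theta * Z` unknown term, and the grid search (a stand-in for gradient descent on the differentiable ODE model) are all illustrative assumptions. The sketch encodes a known part of a qubit Hamiltonian and fits only the unknown time-independent term from simulated "experimental" propagator data.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for a single qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def propagate(controls, H_known, H_unknown, dt):
    """Piecewise-constant propagation, the discrete analogue of the
    differentiable ODE model. H_k = H_known + H_unknown + u_k * X
    (control coupled via X; an illustrative choice)."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        H = H_known + H_unknown + u * X
        U = expm(-1j * H * dt) @ U
    return U

def model_loss(theta, controls, U_data, dt, H_known):
    """Gate infidelity between the model's prediction and the measured
    propagator; theta parametrizes the unknown time-independent term
    as theta * Z (a hypothetical one-parameter ansatz)."""
    U_model = propagate(controls, H_known, theta * Z, dt)
    d = U_model.shape[0]
    return 1.0 - abs(np.trace(U_model.conj().T @ U_data) / d) ** 2

# Synthetic "experiment": the true drift has an uncharacterised 0.3 * Z term
rng = np.random.default_rng(0)
controls = rng.uniform(-1, 1, size=20)
dt = 0.1
H_known = 0.5 * X                                 # partially characterised part
U_data = propagate(controls, H_known, 0.3 * Z, dt)

# Fit theta by a 1-D scan; LH-MBSAC would instead use gradients through
# the differentiable ODE model
thetas = np.linspace(-1, 1, 201)
best = min(thetas, key=lambda t: model_loss(t, controls, U_data, dt, H_known))
```

Once fitted, the same differentiable model can serve as the simulator for gradient-based pulse refinement in the spirit of point 4, since gradients of the infidelity with respect to the controls are available by the same mechanism.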
Cite this page as 'Frank C Langbein, "BQIT:23 – Sample-efficient Model-based Reinforcement Learning for Quantum Control," Ex Tenebris Scientia, 10th April 2023, https://langbein.org/bqit23/ [accessed 6th December 2024]'.

CC BY-NC-SA 4.0 This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.