Paper ID: 2307.12449
DyPP: Dynamic Parameter Prediction to Accelerate Convergence of Variational Quantum Algorithms
Satwik Kundu, Debarshi Kundu, Swaroop Ghosh
The exponential run time of quantum simulators on classical machines, along with the long queue times and high costs of real quantum devices, presents significant challenges to the efficient optimization of Variational Quantum Algorithms (VQAs) such as the Variational Quantum Eigensolver (VQE), the Quantum Approximate Optimization Algorithm (QAOA), and Quantum Neural Networks (QNNs). To address these limitations, we propose a new approach, DyPP (Dynamic Parameter Prediction), which accelerates the convergence of VQAs by exploiting regular trends in parameter weights to predict updated parameter values. We introduce two techniques for optimal prediction performance: Naive Prediction (NaP) and Adaptive Prediction (AdaP). Through extensive experimentation and training of multiple QNN models on various datasets, we demonstrate that DyPP offers a speedup of approximately $2.25\times$ compared to standard training methods, while also improving accuracy (by up to $2.3\%$) and reducing loss (by up to $6.1\%$) with low storage and computational overheads. We also evaluate DyPP's effectiveness in VQE for molecular ground-state energy estimation and in QAOA for graph MaxCut. Our results show that, on average, DyPP yields speedups of up to $3.1\times$ for VQE and $2.91\times$ for QAOA compared to traditional optimization techniques, while using up to $3.3\times$ fewer shots (i.e., repeated circuit executions). Even under hardware noise, DyPP outperforms existing optimization techniques, delivering up to $3.33\times$ speedup and $2.5\times$ fewer shots, thereby enhancing the efficiency of VQAs.
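To give intuition for the core idea, here is a minimal sketch of trajectory-based parameter prediction: fit a low-degree curve to each parameter's recent optimization history and extrapolate it one step ahead, so that one predicted jump can stand in for several expensive circuit-evaluation steps. The function name `predict_parameters`, the polynomial fit, the window length, and the degree are illustrative assumptions; the abstract does not specify the exact prediction scheme used by NaP or AdaP.

```python
import numpy as np

def predict_parameters(history, degree=2):
    """Extrapolate each variational parameter from its recent trajectory.

    history: array of shape (n_steps, n_params), one row per optimizer step.
    Fits a degree-`degree` polynomial (an assumed model, not the paper's
    exact scheme) to each parameter's history and evaluates it one step
    beyond the observed window.
    """
    history = np.asarray(history, dtype=float)
    n_steps, n_params = history.shape
    steps = np.arange(n_steps)
    predicted = np.empty(n_params)
    for p in range(n_params):
        coeffs = np.polyfit(steps, history[:, p], deg=degree)
        predicted[p] = np.polyval(coeffs, n_steps)  # extrapolate one step ahead
    return predicted

# Toy usage: a smooth, drifting trajectory of 4 parameters over 6 steps,
# standing in for parameters logged during VQA optimization.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(0.1, 0.02, size=(6, 4)), axis=0)
print(predict_parameters(trajectory))
```

In this sketch, the saving comes from replacing some fraction of gradient-based steps (each requiring many circuit executions) with a cheap classical extrapolation, which is consistent with the shot reductions the abstract reports.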
Submitted: Jul 23, 2023