Paper ID: 2212.14426

Restricting to the chip architecture maintains the quantum neural network accuracy

Lucas Friedrich, Jonas Maziero

In the era of noisy intermediate-scale quantum devices, variational quantum algorithms (VQAs) are a prominent strategy for constructing quantum machine learning models. These models comprise a quantum and a classical component. The quantum component is characterized by a parametrization $U$, typically built from the composition of various quantum gates. The classical component is an optimizer that adjusts the parameters of $U$ to minimize a cost function $C$. Despite the extensive applications of VQAs, several critical questions persist, such as determining the optimal gate sequence, devising efficient parameter optimization strategies, selecting appropriate cost functions, and understanding the influence of quantum chip architectures on the final results. This article addresses the last question, showing that, in general, the cost function tends to concentrate around a fixed average value as the parametrization approaches a $2$-design. Consequently, when the parametrization is close to a $2$-design, the outcome of the quantum neural network model becomes largely independent of the specific parametrization chosen. This insight makes it possible to use the native architecture of the quantum chip itself to define the parametrization of a VQA. By doing so, the insertion of additional SWAP gates during compilation is avoided, reducing the depth of the VQA and the errors associated with it.
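The concentration effect described above can be illustrated numerically: sampling the cost of a layered ansatz at random parameters shows the cost values clustering around their Haar average as the circuit depth grows, i.e., as the parametrization approaches a $2$-design. The following is a minimal sketch, not the paper's construction; it assumes PennyLane, and the 4-qubit linear-connectivity ansatz, the observable $\langle Z_0\rangle$, and the depth values are illustrative choices. The entangling layer uses only nearest-neighbor CNOTs, so on a chip with linear connectivity no SWAP gates would need to be inserted at compilation.

```python
# Illustrative sketch (assumed setup, not the paper's exact experiment):
# as the number of layers grows, the sampled costs concentrate around
# the Haar average (0 for the traceless observable <Z_0>).
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def make_cost(n_layers):
    @qml.qnode(dev)
    def cost(params):
        # Hardware-efficient ansatz restricted to linear (nearest-neighbor)
        # connectivity: single-qubit RZ/RY rotations followed by a ladder
        # of CNOTs between adjacent qubits only.
        for l in range(n_layers):
            for q in range(n_qubits):
                qml.RZ(params[l, q, 0], wires=q)
                qml.RY(params[l, q, 1], wires=q)
            for q in range(n_qubits - 1):
                qml.CNOT(wires=[q, q + 1])
        return qml.expval(qml.PauliZ(0))
    return cost

rng = np.random.default_rng(0)
for n_layers in [1, 4, 16, 64]:
    cost = make_cost(n_layers)
    samples = [cost(rng.uniform(0, 2 * np.pi, (n_layers, n_qubits, 2)))
               for _ in range(200)]
    # Mean approaches 0 and variance shrinks with depth, signaling
    # cost concentration as the ansatz approaches a 2-design.
    print(n_layers, np.mean(samples), np.var(samples))
```

In this regime the printed variance decays with depth, which is the sense in which the model's outcome becomes insensitive to the specific parametrization and a chip-native gate layout can be used without loss of accuracy.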

Submitted: Dec 29, 2022