
The technique delivers a low‑cost, software‑only fix that enhances the reliability of near‑term quantum simulations, accelerating quantum chemistry and materials research on NISQ devices.
Instability has long plagued variational quantum algorithms, with barren‑plateau gradients and sensitivity to initial parameters limiting their practical use on noisy intermediate‑scale quantum (NISQ) hardware. By borrowing a staple from classical machine learning—L₂‑squared‑norm regularisation—researchers introduce a gentle bias toward smoother regions of the parameter space. This simple additive term conditions the optimisation landscape, mitigating curvature spikes that otherwise derail gradient‑based searches, and does so without any changes to the underlying quantum circuit or hardware architecture.
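In essence, the regularised objective is the circuit's energy expectation plus a λ‑weighted squared parameter norm. The sketch below illustrates that additive term; the `energy` surrogate is a stand‑in assumption (a real VQE would evaluate ⟨ψ(θ)|H|ψ(θ)⟩ on a simulator or device), and the function names are illustrative, not the authors' code.

```python
import numpy as np

def energy(theta):
    # Toy classical surrogate for a VQE energy landscape (assumption);
    # in practice this would be the expectation value of the Hamiltonian
    # for the parameterised circuit at angles theta.
    return float(np.sum(np.sin(3 * theta) * np.cos(theta)))

def regularized_cost(theta, lam):
    # L2-squared-norm penalty: biases the optimiser toward small-norm,
    # smoother regions of parameter space without touching the circuit.
    return energy(theta) + lam * float(np.dot(theta, theta))
```

Setting λ = 0 recovers the ordinary VQE cost, so the penalty can be switched off for a final unregularised refinement.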
The study implements a two‑stage optimisation protocol: an exploratory phase in which the regularisation strength λ follows a cosine‑decay schedule, followed by an unregularised refinement stage. Extensive numerical experiments on H₂, LiH, and a Random‑Field Ising Model, executed on the Devana supercomputer with state‑vector simulators, reveal a non‑monotonic relationship between λ and performance. Within a clearly defined “stabilisation window,” success rates surpass 90% of the theoretical optimum, parameter norms contract, and median final energies improve, regardless of whether L‑BFGS‑B or conjugate‑gradient optimisers are employed.
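The two‑stage protocol can be sketched as follows. This is a minimal illustration under assumptions: the cosine‑decay form, the outer‑loop structure, and all function names are plausible reconstructions rather than the paper's actual implementation, and a toy classical `energy` stands in for the quantum expectation value.

```python
import numpy as np
from scipy.optimize import minimize

def cosine_decay(lam0, step, total_steps):
    # lambda_t = lam0 * 0.5 * (1 + cos(pi * t / T)): decays from lam0 toward 0.
    return lam0 * 0.5 * (1 + np.cos(np.pi * step / total_steps))

def two_stage_vqe(energy, theta0, lam0=0.1, outer_steps=10):
    theta = np.asarray(theta0, dtype=float)
    # Stage 1: exploratory phase with a decaying L2-squared penalty.
    for t in range(outer_steps):
        lam = cosine_decay(lam0, t, outer_steps)
        cost = lambda th: energy(th) + lam * float(np.dot(th, th))
        theta = minimize(cost, theta, method="L-BFGS-B",
                         options={"maxiter": 50}).x
    # Stage 2: unregularised refinement from the stabilised parameters.
    return minimize(energy, theta, method="L-BFGS-B").x
```

On a simple quadratic surrogate the regularised stage pulls the iterate toward the origin, and the final unregularised solve then recovers the true minimiser, mirroring the contracted parameter norms reported in the study.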
For industry, this finding translates into a cost‑effective pathway to more reliable quantum simulations, especially in quantum chemistry and materials design where VQEs are a primary tool. Because the regularisation is purely classical, existing quantum software stacks can adopt it instantly, accelerating the timeline for actionable insights from NISQ devices. Future work may explore adaptive λ schedules that respond dynamically to gradient information, extending the benefits to other variational algorithms such as QAOA and VQC, and further narrowing the gap between theoretical promise and experimental reality.