The study shows that quantum models need sufficient circuit depth to express non-linear functions, yet they currently lag behind classical networks in speed and reliability, a finding that can guide future quantum-ML investment decisions.
Variational quantum classifiers (VQCs) have attracted attention as a hybrid route to quantum machine learning, promising expressive models that leverage superposition and entanglement. The exclusive-OR (XOR) problem, a classic test of non-linear decision boundaries, serves as a minimal benchmark for whether circuit depth can unlock that expressivity. In the study, a two-qubit VQC with two layers of gates learned the XOR mapping, confirming theoretical predictions that deeper circuits can approximate non-linear functions that linear models such as logistic regression cannot.
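For concreteness, here is a minimal sketch of such a classifier trained on the four XOR points. The specific encoding (RY angle encoding), ansatz (PennyLane's StronglyEntanglingLayers), squared-error loss, and hyperparameters are illustrative assumptions, not the study's exact setup.

```python
# Minimal sketch of a depth-2, two-qubit variational classifier on XOR.
# Encoding, ansatz, loss, and hyperparameters are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    # Angle-encode the two binary features.
    qml.RY(np.pi * x[0], wires=0)
    qml.RY(np.pi * x[1], wires=1)
    # Two variational layers of parametrized rotations plus entanglers.
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])
    # Decision function: expectation value of Z on the first qubit.
    return qml.expval(qml.PauliZ(0))

def predict(weights, x):
    # Map <Z> in [-1, 1] to a score in [0, 1].
    return (1 - circuit(weights, x)) / 2

def cost(weights, X, y):
    # Squared-error surrogate loss (the study reports binary cross-entropy).
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (predict(weights, x) - target) ** 2
    return loss / len(X)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], requires_grad=False)
y = np.array([0, 1, 1, 0], requires_grad=False)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
weights = 0.1 * np.random.randn(*shape)  # trainable by default

opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(300):
    weights = opt.step(lambda w: cost(w, X, y), weights)

print([round(float(predict(weights, x)), 3) for x in X])  # ideally ~[0, 1, 1, 0]
```

On a noiseless simulator this setup typically converges to the XOR pattern, though the outcome depends on the random initialization.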
When measured against a conventional multilayer perceptron, the depth‑2 VQC reached identical test accuracy, but the classical network delivered lower binary cross‑entropy and completed training orders of magnitude faster. Hardware execution on a superconducting processor added a mean absolute deviation of roughly 0.118 to the decision function, highlighting systematic noise that persists even when the overall XOR structure is preserved. These findings underscore that, despite comparable accuracy, current quantum hardware imposes a performance penalty that outweighs any potential advantage for simple benchmarks.
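The noise figure can be made concrete as a metric: the mean absolute deviation (MAD) between the ideal decision function and its hardware estimate, averaged over evaluation points. The sketch below illustrates only the computation, using stand-in functions and synthetic noise rather than the study's measurements.

```python
# Illustrative computation of the mean absolute deviation (MAD) between an
# ideal decision function and a noisy hardware estimate. The functions and
# noise level below are placeholders, not the study's data.
import numpy as np

def mean_abs_deviation(f_ideal, f_hw, points):
    """MAD = mean over points of |f_ideal(x) - f_hw(x)|."""
    return float(np.mean([abs(f_ideal(x) - f_hw(x)) for x in points]))

rng = np.random.default_rng(0)
points = [np.array([a, b])
          for a in np.linspace(0, 1, 11)
          for b in np.linspace(0, 1, 11)]

# Stand-in ideal decision function with XOR-like structure (placeholder).
ideal = lambda x: np.cos(np.pi * (x[0] + x[1]))
# Stand-in "hardware" readout: ideal value plus synthetic noise (placeholder).
hardware = lambda x: ideal(x) + rng.normal(0.0, 0.15)

print(f"MAD = {mean_abs_deviation(ideal, hardware, points):.3f}")
```

If the decision function is an expectation value ranging over [-1, 1], a deviation of roughly 0.118 shifts its values noticeably without necessarily flipping the sign pattern that encodes XOR, consistent with noise that degrades calibration while leaving accuracy intact.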
The broader implication for the quantum‑ML community is clear: depth alone does not guarantee practical superiority. Future work must move beyond low‑dimensional tasks, address barren‑plateau optimization hurdles, and demonstrate tangible gains in robustness or computational efficiency on high‑dimensional, real‑world datasets. Only then can variational quantum classifiers transition from proof‑of‑concept to a competitive alternative to classical deep learning architectures.
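To see why barren plateaus matter here, a common diagnostic is to estimate the variance of a cost gradient over random parameter initializations as qubits are added; for random deep circuits it shrinks rapidly. The probe below is a generic illustration of that effect (the circuit family, depth, and sample counts are assumptions), not an experiment from the study.

```python
# Generic barren-plateau probe: variance of one cost-gradient component over
# random initializations, for increasing qubit counts. Circuit family, depth,
# and sample counts are assumptions chosen for demonstration only.
import pennylane as qml
from pennylane import numpy as pnp
import numpy as np

def grad_variance(n_qubits, n_layers=5, n_samples=50, seed=0):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def cost(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers,
                                               n_wires=n_qubits)
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(n_samples):
        w = pnp.array(rng.uniform(0, 2 * np.pi, size=shape),
                      requires_grad=True)
        # Track the gradient of a single, fixed circuit parameter.
        grads.append(float(qml.grad(cost)(w)[0, 0, 0]))
    return np.var(grads)

for n in (2, 4, 6):
    print(f"{n} qubits: Var[dC/dtheta] ~ {grad_variance(n):.4f}")
```

A rapidly shrinking variance means gradient-based training stalls as system size grows, which is precisely the optimization hurdle flagged above.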