Robust Quantum Machine Learning Achieves Increased Accuracy on MNIST and FMNIST Datasets


Quantum Zeitgeist · Jan 20, 2026

Why It Matters

The approach cuts resource demands and boosts security for quantum machine‑learning models, accelerating their readiness for real‑world applications. It demonstrates a scalable path for robust QML on noisy intermediate‑scale quantum (NISQ) devices.


Chris Nakhl, Maxwell West, and Muhammad Usman · School of Physics, University of Melbourne

The efficient encoding of classical data onto quantum devices represents a significant challenge in the advancement of quantum machine learning. The authors address this problem by introducing a novel approach that utilizes Matrix Product States (MPS) to construct encoding circuits. Their research demonstrates a method for creating low‑depth, approximate encodings that not only maintain classification accuracy but also enhance robustness against adversarial attacks. This work is illustrated through successful applications to benchmark datasets such as MNIST and FMNIST, alongside a practical demonstration on superconducting hardware, paving the way for more resilient and scalable quantum machine learning algorithms.


Background

Quantum Machine Learning (QML) requires efficient loading of classical information onto a quantum processor. By leveraging the MPS representation of quantum systems, the authors construct encoding circuits that move beyond traditional techniques such as basis or angle encoding, offering a potentially more scalable solution for complex datasets. The team successfully implemented this encoding scheme, preparing desired quantum states with reduced computational cost.

The study reveals a method for approximating quantum states with lower circuit depth by iteratively building circuits from the MPS representation, avoiding heuristic methods often employed in variational encoding. The resulting circuits encode an input vector as a superposition of states, requiring exponentially fewer qubits than conventional methods that assign one qubit per feature. Crucially, this MPS‑assisted state preparation not only reduces circuit complexity but also demonstrably increases robustness against classical adversarial attacks—a significant advancement for secure QML applications. Experiments show that circuit depth can be decreased without increasing the number of qubits needed for computation.
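The qubit saving from amplitude-style encoding can be made concrete with a minimal NumPy sketch (illustrative only, not the authors' circuit construction): a feature vector of length 2ⁿ is normalised and stored in the amplitudes of just n qubits.

```python
import numpy as np

# Illustrative amplitude-style encoding: a length-2**n feature vector
# is normalised into the amplitudes of an n-qubit state, so 256 pixel
# values need only 8 qubits rather than one qubit per feature.
def amplitude_encode(features):
    x = np.asarray(features, dtype=float)
    state = x / np.linalg.norm(x)        # unit-norm quantum state vector
    n_qubits = int(np.log2(state.size))  # qubits needed: log2 of feature count
    return state, n_qubits

state, n = amplitude_encode(np.arange(1, 257))  # 256 "pixel" values
print(n)                                # 8 qubits suffice
print(np.isclose(state @ state, 1.0))   # True: a valid normalised state
```

The encoding circuit that actually prepares this state on hardware is what the MPS construction below keeps shallow.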


Demonstrations

The breakthrough is illustrated through adversarially robust variational quantum classifiers trained on the MNIST and FMNIST datasets. The authors utilised Singular Value Decomposition (SVD) and reshaping techniques to build the MPS from classical input vectors, effectively decomposing matrices into a product of smaller matrices. By strategically discarding insignificant singular values, they reduced memory requirements and streamlined quantum circuit construction. This innovative use of SVD enables a more efficient representation of the quantum state, paving the way for more complex QML algorithms.

A small‑scale experimental demonstration on a superconducting quantum device confirmed the feasibility of implementing the MPS‑based encoding in a real‑world quantum‑computing environment. The research establishes that states represented as MPS encode entanglement in a manner that can be directly probed, offering insights into the quantum properties of the encoded data. The work opens new avenues for developing resilient QML algorithms and exploring the potential of quantum computing for robust data analysis and classification tasks, particularly in scenarios vulnerable to adversarial manipulation.


Matrix Product State Quantum State Preparation

The team engineered a methodology for encoding classical data onto quantum devices by leveraging the MPS representation to construct efficient quantum circuits. Preparing arbitrary quantum states typically requires circuits with depth scaling as O(4ⁿ) (where n is the number of qubits). Instead of heuristic methods, the study pioneered MPS‑assisted state preparation, iteratively refining circuits to approximate the desired state without relying on variational or genetic algorithms.

The quantum state is expressed as

\[
|\psi\rangle = \sum_{i_1, i_2, \dots, i_N} A^{(1)}_{i_1} A^{(2)}_{i_2} \dots A^{(N)}_{i_N}\, |i_1 i_2 \dots i_N\rangle,
\]

where N denotes the total number of qubits and each \(A^{(s)}_{i_s}\) is a matrix of dimension at most \(\chi \times \chi\) (the bond dimension). The authors used SVD as the core operation, reshaping the initial input vector of size \(2^N\) into a \(2 \times 2^{N-1}\) matrix, then performing an SVD to obtain matrices of dimensions \(2 \times s_0\), \(s_0 \times s_0\), and \(s_0 \times 2^{N-1}\). The \(2 \times s_0\) matrix becomes \(A^{(1)}\). The process repeats iteratively: the remaining \(S V^\dagger\) factor from each decomposition is reshaped to \(2 s_{i-1} \times 2^{N-i}\), decomposed again, and the resulting \(U\) matrix is stored as the next site tensor. By discarding small singular values, the method yields low‑depth circuits and a memory‑efficient state representation.
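The iterative sweep described above can be sketched in NumPy (an illustrative reimplementation under the stated reshaping conventions, not the authors' code):

```python
import numpy as np

def mps_decompose(vec, chi_max=None, tol=1e-12):
    """Iteratively SVD a length-2**N vector into a chain of MPS tensors.

    Each step reshapes the remainder to (2 * previous_bond, rest),
    SVDs it, stores U as the site tensor, and carries S @ Vh forward.
    Singular values below `tol` (or beyond `chi_max`) are discarded.
    """
    N = int(np.log2(vec.size))
    tensors, rest, bond = [], vec.astype(float), 1
    for _ in range(N - 1):
        M = rest.reshape(2 * bond, -1)
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        keep = S > tol
        if chi_max is not None:
            keep[chi_max:] = False
        U, S, Vh = U[:, keep], S[keep], Vh[keep]
        tensors.append(U.reshape(bond, 2, -1))  # site tensor: bond x 2 x new_bond
        rest, bond = S[:, None] * Vh, S.size    # carry S @ Vh to the next site
    tensors.append(rest.reshape(bond, 2, 1))    # final site tensor
    return tensors

def mps_contract(tensors):
    """Contract the MPS chain back into a full state vector."""
    out = tensors[0]
    for A in tensors[1:]:
        out = np.tensordot(out, A, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(1)
psi = rng.normal(size=2**6)
psi /= np.linalg.norm(psi)
mps = mps_decompose(psi)                       # no truncation triggered here
print(np.allclose(psi, mps_contract(mps)))     # True: exact round trip
```

Passing a small `chi_max` caps every bond, producing the approximate, lower-depth encodings the paper exploits.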

Applying this technique to the MNIST and FMNIST datasets, the researchers constructed adversarially robust variational classifiers. The small‑scale experimental validation on a superconducting device confirmed the efficacy of the MPS‑assisted encoding and its ability to enhance robustness against classical adversarial attacks—a crucial step toward practical quantum machine learning.


MPS Encoding Boosts Quantum Classification Robustness

The novel MPS representation enables the construction of low‑depth circuits that encode desired quantum states, a critical factor for near‑term quantum hardware. Experiments demonstrate that this encoding maintains classification accuracy while enhancing robustness against classical adversarial attacks, improving security and reliability of quantum algorithms.

Key points:

  • SVD‑based decomposition – Input vectors of size \(2^N\) are reshaped into \(2 \times 2^{N-1}\) matrices and decomposed via SVD, yielding matrices of sizes \(2 \times s_0\), \(s_0 \times s_0\), and \(s_0 \times 2^{N-1}\). The bond dimension \(s_0\) controls memory usage and circuit complexity.

  • Gate optimisation – Choosing \(k=2\) for the reduced‑density‑matrix decomposition results in two‑qubit unitaries that can be implemented with at most three CNOT gates and six single‑qubit rotations, minimising circuit depth on hardware with limited connectivity.

  • Iterative state preparation – Even when the algorithm is terminated before full disentanglement, the probability of measuring \(|0\rangle\) at each site increases, providing a controllable approximation level.

  • Nearest‑neighbour architecture – The resulting circuits involve only nearest‑neighbour operations, offering advantages for many quantum hardware platforms.
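The "controllable approximation level" can be made concrete: capping the bond dimension trades circuit depth against state fidelity. A self-contained NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def truncated_state(vec, chi_max):
    """MPS-decompose `vec` with bond dimension capped at `chi_max`,
    then contract back to a (generally approximate) state vector."""
    N = int(np.log2(vec.size))
    rest, bond, tensors = vec.astype(float), 1, []
    for _ in range(N - 1):
        U, S, Vh = np.linalg.svd(rest.reshape(2 * bond, -1),
                                 full_matrices=False)
        U, S, Vh = U[:, :chi_max], S[:chi_max], Vh[:chi_max]  # truncate bond
        tensors.append(U.reshape(bond, 2, -1))
        rest, bond = S[:, None] * Vh, S.size
    tensors.append(rest.reshape(bond, 2, 1))
    out = tensors[0]
    for A in tensors[1:]:
        out = np.tensordot(out, A, axes=([-1], [0]))
    psi = out.reshape(-1)
    return psi / np.linalg.norm(psi)      # renormalise after truncation

rng = np.random.default_rng(2)
psi = rng.normal(size=2**6)
psi /= np.linalg.norm(psi)
# Fidelity with the target state rises toward 1 as the cap grows
fids = {chi: abs(psi @ truncated_state(psi, chi)) for chi in (1, 2, 4, 8)}
print(fids)
```

For a 6-qubit state the largest bond needed is 8, so `chi_max=8` reproduces the state exactly while smaller caps give progressively shallower, approximate encodings.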

These results provide a promising pathway toward scalable and robust quantum machine learning algorithms.


Robust Data Encoding for Quantum Machine Learning

The work introduces a general, noise‑resilient encoding strategy based on MPS that facilitates low‑depth, approximate encodings while preserving, and even enhancing, robustness against classical adversarial attacks. Validation on MNIST, FMNIST, and a superconducting device shows significantly improved accuracy compared with existing methods.

While bespoke encodings may be possible for specific tasks, a universal solution is needed that does not compromise the potential advantages of quantum algorithms. Leveraging the inherent noise resilience of quantum systems through approximate encoding offers a promising route, simultaneously improving performance and security.

The authors note a limitation: the advantage of this encoding against quantum adversaries with access to quantum devices or prior knowledge of the algorithm remains an open question. Future research should investigate this potential vulnerability and explore broader applicability to more complex datasets and QML algorithms.
