Fault-Tolerant Quantum Computing: Novel Protocol Efficiently Reduces Resource Cost
Phys.org (Quantum Physics News) · January 5, 2026

Why It Matters

The protocol breaks the long‑standing space‑time trade‑off, paving the way for scalable, faster quantum computers and accelerating commercial quantum‑technology roadmaps.

By Ingrid Fadelli · Published January 5, 2026

Image: Conceptual overview of our fault‑tolerant protocol. By combining QLDPC codes, which achieve reduced hardware scale, with concatenated codes, which enable fast computation, we realize a low‑overhead fault‑tolerant quantum computer. Credit: Shiro Tamiya.

Quantum computers, systems that process information by leveraging quantum mechanical effects, could soon outperform classical computers on some complex computational problems. These computers rely on qubits, units of quantum information whose states can become correlated with one another via a quantum mechanical effect known as entanglement.

Qubits are highly susceptible to noise in their surroundings, which can disrupt their quantum states and lead to computation errors. Quantum engineers have thus been trying to devise effective strategies to achieve fault‑tolerant quantum computation, or in other words, to correct errors that arise when quantum computers process information.

Existing approaches work either by reducing the number of extra physical qubits needed per logical qubit (i.e., space overhead) or by reducing the number of physical operations needed to perform a single logical operation (i.e., time overhead). Effectively tackling both goals at once, which would enable more scalable systems and faster computations, has so far proved challenging.
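To make the two overheads concrete, here is a rough back‑of‑the‑envelope comparison in Python. The surface‑code distance formula and threshold below are standard textbook approximations, and the QLDPC rate is a hypothetical placeholder; none of these numbers come from the paper.

```python
import math

def surface_code_qubits_per_logical(p_phys, p_target, p_th=1e-2):
    """Rough surface-code estimate: the logical error rate falls off as
    ~(p_phys/p_th)^((d+1)/2), so the code distance d, and with it the
    ~2*d^2 physical qubits per logical qubit, grows with log(1/p_target)."""
    ratio = p_phys / p_th
    d = max(3, math.ceil(2 * math.log(p_target) / math.log(ratio) - 1))
    if d % 2 == 0:
        d += 1  # code distances are conventionally odd
    return 2 * d * d  # data plus measurement qubits, order of magnitude

# Constant-rate QLDPC codes encode k logical qubits into n = k / R
# physical qubits for a fixed rate R, independent of the target accuracy.
QLDPC_RATE = 1 / 25  # hypothetical rate, for illustration only

for p_target in (1e-6, 1e-9, 1e-12):
    sc = surface_code_qubits_per_logical(p_phys=1e-3, p_target=p_target)
    print(f"target {p_target:.0e}: surface code ~{sc} qubits/logical, "
          f"constant-rate QLDPC ~{int(1 / QLDPC_RATE)}")
```

The point is the scaling: the surface‑code footprint per logical qubit grows as the target accuracy tightens, while a constant‑rate QLDPC code keeps a fixed footprint, which is what "constant space overhead" means.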

In a paper published in Nature Physics, researchers at the University of Tokyo and Nanofiber Quantum Technologies Inc. introduce a new protocol that could enable fault‑tolerant quantum computing by tackling both space and time overhead.

The new protocol combines two distinct types of quantum error‑correcting codes, known as quantum low‑density parity‑check (QLDPC) codes and concatenated Steane codes.

“The inspiration for our study came from a fundamental dilemma in fault‑tolerant quantum computation: the conflict between hardware scale and computational speed,” Shiro Tamiya, first author of the paper, told Phys.org.

“In technical terms, this corresponds to the trade‑off between space overhead (the number of physical qubits per logical qubit) and time overhead (the number of physical operations per logical operation).”

Tackling the trade‑off between space and time overhead

Typically, protocols for fault‑tolerant quantum computing either achieve fast computations or reduce the scale of the hardware required. Improving one of these qualities tends to come at the expense of the other.

Conventional protocols, such as those based on so‑called surface codes, achieve fast computation and moderately efficient hardware scaling. In contrast, existing protocols that rely on high‑rate QLDPC codes tend to suppress the scale of hardware, while slowing down computations significantly.

“A recent approach by my co‑authors relied on concatenated quantum Hamming codes, which improved the speed while maintaining a suppressed hardware scale (constant space overhead), but did not fully match the performance of traditional methods,” said Tamiya. “Our objective was to finally resolve this dilemma.”

Tamiya and the team ultimately proved that it is possible to maintain a suppressed hardware scale while improving a system’s computational speed. To achieve this, they created a new “hybrid” protocol that combines two established types of quantum codes.

“We use QLDPC codes to store quantum information efficiently because they require very few physical qubits,” explained Tamiya.

“However, due to their complex structure, executing logical operations on QLDPC codes is typically difficult. In contrast, concatenated codes feature a simple nested structure that enables fast computation. We thus used concatenated codes to rapidly generate auxiliary quantum states that assist in executing logical operations on QLDPC codes.”
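The "simple nested structure" Tamiya mentions can be illustrated with textbook numbers for the seven‑qubit Steane code: each extra level of concatenation multiplies the qubit count by seven but squares the rescaled error rate, giving doubly exponential error suppression. The threshold and physical error rate in this sketch are illustrative placeholders, not figures from the paper.

```python
# Doubly exponential error suppression under concatenation (textbook
# Steane-code numbers; the paper's codes and constants differ).
P_TH = 1e-3    # illustrative per-level threshold
P_PHYS = 1e-4  # illustrative physical error rate, below threshold

for level in range(5):
    n_physical = 7 ** level  # Steane code: 7 qubits per nesting level
    p_logical = P_TH * (P_PHYS / P_TH) ** (2 ** level)
    print(f"level {level}: {n_physical:>4} physical qubits, "
          f"logical error ~ {p_logical:.1e}")
```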

The researchers’ approach essentially assigns different roles to the two types of error‑correcting codes: QLDPC codes store the logical qubits efficiently, while concatenated codes rapidly supply the auxiliary states that drive logical operations.
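In pseudocode, that division of labor might look something like the sketch below. This is purely schematic: the class and method names are hypothetical, and the real protocol specifies precisely how auxiliary states are prepared by the concatenated code and consumed, for example via gate teleportation, to act on the QLDPC‑encoded data.

```python
# Schematic sketch of the role division described in the article
# (hypothetical API, not the paper's construction).

class QLDPCMemory:
    """Stores logical qubits at constant space overhead."""
    def apply_teleported_gate(self, gate, aux_state):
        print(f"memory: applied {gate} by consuming {aux_state}")

class ConcatenatedFactory:
    """Rapidly prepares auxiliary resource states."""
    def prepare(self, gate):
        return f"aux-state[{gate}]"

memory, factory = QLDPCMemory(), ConcatenatedFactory()
for gate in ("CNOT", "T", "H"):
    aux = factory.prepare(gate)              # fast, thanks to nesting
    memory.apply_teleported_gate(gate, aux)  # logical op on QLDPC block
```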

A hybrid solution to a long‑standing challenge

In initial tests, the new protocol was found to minimize the number of qubits required to perform computations while maximizing the speed at which a system can process information.

“From a fundamental perspective, our work addresses a long‑standing open question regarding the space‑time trade‑off in fault‑tolerant quantum computation,” said Tamiya.

“Previously, there appeared to be a dilemma where minimizing space overhead required a significant sacrifice in time overhead. We demonstrated that it is possible to overcome this barrier, establishing a new theoretical understanding that low overheads in both space and time are simultaneously achievable.”

In addition to demonstrating that constant space overhead and polylogarithmic time overhead are simultaneously achievable, the team established a threshold theorem for their protocol: a mathematical result that guarantees reliable computation as long as the noise stays below a certain level.
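Schematically, a threshold theorem of this kind takes the following form; the headline overheads match the paper's title (constant space, polylogarithmic time), but the rendering below is generic rather than the paper's precise statement.

```latex
% Generic shape of the threshold theorem (schematic, not the paper's
% exact statement): if the physical error rate p is below a threshold
% p_th, any ideal circuit C can be simulated fault-tolerantly to
% accuracy \varepsilon with the overheads shown.
\[
  p < p_{\mathrm{th}}
  \;\Longrightarrow\;
  \underbrace{\frac{N_{\mathrm{physical}}}{N_{\mathrm{logical}}}}_{\text{space overhead}} = O(1),
  \qquad
  \underbrace{\text{time per logical operation}}_{\text{time overhead}}
  = O\!\left(\operatorname{polylog}\frac{|C|}{\varepsilon}\right).
\]
```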

“To prove this theorem, we introduced a new analytical technique called partial circuit reduction, providing a unified framework to analyze our protocol, which integrates QLDPC codes and concatenated codes—two distinct types of codes with different error‑suppression mechanisms,” explained Tamiya.

“Furthermore, the computational cost of running decoders for quantum error correction in real time is a major bottleneck in fault‑tolerant quantum computation.”

Contributing to the advancement of quantum technologies

The recent work could soon inform the development of other hybrid error‑correction solutions that improve the overall performance and efficiency of quantum computers. The team demonstrated the new protocol’s potential in theoretical analyses and plans to validate it experimentally.

“Unlike previous theoretical works that often assume instantaneous decoding, our proof explicitly accounts for the finite time costs of classical decoding and the errors that accumulate during the decoding process,” said Tamiya. “By confirming that fault tolerance holds up even under these realistic constraints, we provide a solid theoretical foundation for building practical, large‑scale quantum computers.”
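A simple way to see why finite decoding time matters is the backlog effect: if each round of syndrome data takes longer to decode than to generate, undecoded data piles up without bound and the computation must stall or fail. The toy model below illustrates that effect only; it is not the paper's analysis.

```python
# Toy model of decoder backlog (illustrative only, not the paper's
# analysis): syndrome rounds arrive every `cycle_us` microseconds and
# each takes `decode_us` microseconds to process.
def backlog_after(rounds, cycle_us, decode_us):
    backlog = 0.0  # microseconds of undecoded work in the queue
    for _ in range(rounds):
        backlog = max(0.0, backlog + decode_us - cycle_us)
    return backlog

for decode_us in (0.8, 1.0, 1.2):  # decoder speed vs a 1 us syndrome cycle
    lag = backlog_after(rounds=10_000, cycle_us=1.0, decode_us=decode_us)
    print(f"decode {decode_us} us/round -> backlog {lag:.0f} us "
          f"after 10,000 rounds")
```

Any decoder even slightly slower than the syndrome cycle accumulates an ever‑growing queue, which is why the paper's accounting of finite decoding costs matters for practical fault tolerance.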

The researchers now plan to continue improving their protocol and assessing its potential in further tests. In the future, it could be applied to real quantum‑computing systems and evaluated in real‑world scenarios.

“In the race to build quantum computers, superconducting qubits have traditionally been the main focus,” added Tamiya. “Yet platforms like neutral atoms and trapped ions are now also attracting a lot of attention, as they offer all‑to‑all connectivity—a feature that makes implementing low‑overhead fault‑tolerant protocols like ours much more feasible.

“These systems come with unique physical constraints, such as the restrictions on how atoms can be moved and the time required for such movements. A key direction for future research is to bridge the gap between our theory and these experimental realities, clarifying exactly what is needed to implement our protocol on these promising hardware platforms.”

Written by Ingrid Fadelli, edited by Sadie Harley, fact‑checked and reviewed by Robert Egan.

Publication details

Shiro Tamiya et al., Fault‑tolerant quantum computation with polylogarithmic time and constant space overheads, Nature Physics (2025). DOI: 10.1038/s41567-025-03102-5.
