A scheme to reduce the errors that plague quantum computers is a step closer to reality, researchers at Google announced today. Instead of ordinary bits that can be set to 0 or 1, a quantum computer uses qubits that can be set to 0 and 1 at the same time. But they are fragile. One tactic for protecting the information carried by one qubit is to spread it out over many others. Now, the Google team has shown it can reduce errors by spreading the information over more and more qubits. Such “scaling” marks a key step toward Google’s goal of maintaining indefinitely one qubit’s worth of information—a “logical” qubit—by encoding it on 1000 physical ones.

“This is a significant proof-of-concept demonstration,” says Joschka Roffe, a theoretical physicist at the Free University of Berlin who was not involved in the experiment. Still, he notes, in spite of the scaling, Google’s logical qubit isn’t yet as reliable as the underlying physical ones.

A full-fledged quantum computer could perform certain tasks, such as cracking current internet encryption schemes, that overwhelm a conventional computer. Its qubits can be fashioned of many things, such as ions, photons, and atoms. Google’s qubits are tiny circuits of superconducting metal that have a lower energy state denoting 0 and a higher one denoting 1. Microwaves can coax a circuit into either state—or into both at once. However, noise tends to destroy that two-way state in 20 microseconds, far too little time to run ambitious algorithms.

In an effort to fortify the qubits, Google engineers are following a tack taken in the 1940s for correcting errors in the first computers, in which noise sometimes flipped a bit from 0 to 1, or vice versa. Suppose you copy a bit’s information onto two other bits. The probability that noise flips all three is far smaller. And if one flips, the computer can figure out which one it was by comparing pairs of bits.
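The classical scheme is simple enough to sketch in a few lines of Python. This is an illustration of the general idea described above, not Google’s code: three copies of a bit, a majority vote to decode, and pairwise comparisons to locate a single flip.

```python
def encode(bit):
    # Repetition code: copy the logical bit onto three physical bits.
    return [bit, bit, bit]

def decode(bits):
    # Majority vote: a single flipped bit is outvoted by the other two.
    return 1 if sum(bits) >= 2 else 0

def syndrome(bits):
    # Compare pairs of bits; together the two parities pinpoint
    # which bit (if any) disagrees with its neighbors.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# A single flip is corrected by the vote...
noisy = encode(0)
noisy[1] ^= 1                       # noise flips the middle bit
assert decode(noisy) == 0
assert syndrome(noisy) == (1, 1)    # both pairs disagree: bit 1 flipped

# ...but two simultaneous flips defeat the code, which is why
# flipping all three copies must be made improbable.
assert decode([1, 0, 1]) == 1       # decoded incorrectly
```

The key point is that the decoder never needs to know the original bit; the pairwise parities alone reveal the likely error.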

The laws of quantum mechanics forbid using the exact same approach in quantum computers. It’s impossible to copy the state of one qubit onto others. Moreover, measuring a qubit in a 0-and-1 state collapses it to either 0 or 1. Quantum error correction involves subtle workarounds, where a qubit’s information is never measured directly, and, instead of being copied, the original qubit’s state is expanded through a phenomenon called entanglement.

Take, for example, a single qubit in a 0-and-1 state. Using entanglement, two other qubits can be roped in to make a quantum state in which all three are 0 and simultaneously all three are 1. Call it 000-and-111. The information in that state is the same as in the original one and forms the logical qubit. Now, if, say, the second of these three data qubits flips, the state will become 010-and-101. To detect such a flip, researchers entangle additional qubits between the first and second and the second and third qubits. Measurements on those “ancillary” qubits reveal the flipped qubit in the original trio of qubits, which are never measured. In principle, researchers can ease the flipped qubit back to its original state.
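A classical stand-in can show how the ancillary-qubit trick works, under the simplifying assumptions that only bit-flip errors occur and that each ancilla reports the parity of one neighboring pair of data qubits. Real stabilizer measurements are quantum operations on superposed states; this sketch only mimics their bookkeeping.

```python
def measure_ancillas(data):
    # One ancilla sits between qubits 0 and 1, another between 1 and 2.
    # Each reports only whether its pair agrees -- the data qubits
    # themselves are never read out directly.
    return (data[0] ^ data[1], data[1] ^ data[2])

# Syndrome table: which data qubit (if any) each pattern implicates.
SYNDROME = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(data):
    flipped = SYNDROME[measure_ancillas(data)]
    if flipped is not None:
        data[flipped] ^= 1          # ease the flipped qubit back
    return data

# After qubit 1 flips, 000-and-111 becomes 010-and-101; both branches
# of the superposition would show the same syndrome, so checking the
# two bit strings separately suffices for this classical sketch.
assert correct([0, 1, 0]) == [0, 0, 0]
assert correct([1, 0, 1]) == [1, 1, 1]
```

Because both branches of the superposition produce the same parities, measuring the ancillas reveals the error without collapsing the 0-and-1 information in the logical qubit.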

Now, the Google Quantum AI team has shown how the scheme improves when the information in the logical qubit is spread among more and more physical qubits. Using a 72-qubit chip, the team encoded a single logical qubit in two ways—in either a grid of 17 qubits (nine data and eight ancillary qubits) or 49 qubits (25 data and 24 ancillary qubits). Researchers put each grid through 25 cycles of measurements, looking for flipped qubits. Instead of correcting them, researchers just kept track of them, which sufficed for the experiment, says Julian Kelly, a physicist and director of quantum hardware at Google.
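The two grid sizes follow the qubit count of a distance-d surface code in its rotated layout, which uses d² data qubits and d² − 1 ancillary qubits (the code in Google’s experiment; the formula here is the standard counting for that layout):

```python
def surface_code_qubits(d):
    # Rotated surface code of odd distance d:
    # d*d data qubits plus d*d - 1 ancillary qubits.
    data = d * d
    ancilla = d * d - 1
    return data, ancilla, data + ancilla

assert surface_code_qubits(3) == (9, 8, 17)    # the smaller grid
assert surface_code_qubits(5) == (25, 24, 49)  # the bigger grid
```

A larger distance means more physical qubits per logical qubit, but also more simultaneous errors required to fool the decoder.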

After the 25 cycles, they measured the data qubits directly to see whether the ancillary qubits tracked all the flips or more had sneaked in, meaning the machine lost track of the logical qubit. Over many trials, the probability per cycle of losing the logical qubit was 3.028% with the smaller grid and 2.914% with the bigger one, the team reports today in Nature. Thus, the error rate shrank as the number of physical qubits increased—although just barely.

Those numbers may underwhelm, as even a single physical qubit has a lower error rate. But the scaling is more important than the actual reliability of the logical qubit, Kelly says. “The scalability is really the trick,” he says. Still, to reach Google’s goal of encoding a logical qubit on 1000 physical ones with an error rate of 0.0001%, the scaling must be 20 times better.
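A back-of-envelope calculation makes the gap concrete. The ratio of the two reported error rates is the suppression achieved by growing the grid (researchers often call this factor lambda), and it sits barely above 1, while the stated target is orders of magnitude away:

```python
eps_small = 0.03028   # error per cycle, 17-qubit grid (reported)
eps_big   = 0.02914   # error per cycle, 49-qubit grid (reported)

# Suppression gained by enlarging the code: barely above 1.
lam = eps_small / eps_big
assert 1.03 < lam < 1.05

# Google's stated goal: an error rate of 0.0001% per cycle,
# i.e. one error in a million cycles.
target = 1e-6
gap = eps_big / target
assert gap > 2.9e4    # roughly a 29,000-fold reduction still needed
```

The arithmetic shows why the team emphasizes scaling over the raw numbers: each step up in grid size must suppress errors far more strongly than it does today.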

Google’s experiment is not the only game in town, notes Greg Kuperberg, a mathematician at the University of California, Davis. A company called Quantinuum has performed an experiment in which the logical qubit is more robust than the underlying physical ones, using ion qubits, and physicists at Yale University have done the same in an experiment that mixes superconducting qubits and photons. However, ion systems may not scale as easily, and the Yale system is something of an “apples and oranges” comparison, Kuperberg notes.

Still, Kuperberg says, the results show physicists are on the threshold of using imperfect physical qubits to make much better logical ones. “I’m still going to call that the most important benchmark [in quantum computing] I can think of right now.”

Source: https://www.science.org/content/article/quantum-computers-take-key-step-toward-curbing-errors