Quantum computers take an important step towards limiting errors

A scheme to reduce the errors that plague quantum computers is one step closer to reality, researchers at Google announced today. Instead of ordinary bits that can be set to 0 or 1, a quantum computer uses qubits that can be set to 0 and 1 at the same time. But qubits are fragile. One tactic to protect the information carried by one qubit is to spread it among many others. Now, the Google team has shown that it can reduce errors by spreading the information over more and more qubits. Such “scaling” is a critical step toward Google’s goal of indefinitely preserving a single bit of quantum information, a “logical” qubit, by encoding it onto 1000 physical ones.

“This is a significant proof-of-concept demonstration,” says Joschka Roffe, a theoretical physicist at the Free University of Berlin who was not involved in the experiment. Still, he notes, despite the improvement with scale, Google’s logical qubit is not yet as reliable as the underlying physical ones.

A full-fledged quantum computer could perform certain tasks, such as cracking the encryption schemes that currently protect internet traffic, that an ordinary computer cannot. Its qubits can be made of many things, such as ions, photons, or atoms. Google’s qubits are tiny circuits of superconducting metal with a lower energy state that represents 0 and a higher one that represents 1. Microwaves can flip a circuit into either state, or into both states at the same time. However, noise typically destroys that delicate two-way state within about 20 microseconds, far too little time to run ambitious algorithms.

In an effort to strengthen the qubits, Google engineers are following an error-correction approach developed in the 1940s for the first computers, in which noise sometimes flipped a bit from 0 to 1, or vice versa. Say you copy one bit of information onto two other bits. Noise is unlikely to flip all three at once, and if one bit does flip, the computer can figure out which one by comparing pairs of bits.
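
The logic is simple enough to sketch in a few lines of code. The snippet below is purely illustrative (a minimal Python sketch, not code from any historical machine or from Google): it copies one bit onto three, then locates and undoes a single flip by comparing pairs.

```python
# Minimal sketch of a classical 3-bit repetition code (illustrative only).
def encode(bit):
    """Copy one bit of information onto three bits."""
    return [bit, bit, bit]

def locate_flip(bits):
    """Compare pairs of bits to find a single flipped bit, if any."""
    b0, b1, b2 = bits
    if b0 == b1 == b2:
        return None   # no error detected
    if b0 == b1:
        return 2      # third bit disagrees with the other two
    if b0 == b2:
        return 1      # second bit disagrees
    return 0          # first bit disagrees

def correct(bits):
    """Flip the odd one out back, restoring the majority value."""
    i = locate_flip(bits)
    if i is not None:
        bits[i] ^= 1
    return bits

noisy = encode(1)
noisy[1] ^= 1                      # simulate noise flipping the second bit
assert correct(noisy) == [1, 1, 1]
```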

The laws of quantum mechanics forbid the same approach in a quantum computer: it is impossible to copy the state of one qubit onto others, and measuring a qubit in a 0-and-1 state collapses it to either a 0 or a 1. So quantum error correction involves never measuring a qubit’s information directly; instead of being copied, the original qubit’s state is spread to other qubits through a phenomenon called entanglement.
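
In standard textbook notation (nothing specific to Google’s hardware), spreading a state by entanglement looks like this, where α and β are the unknown amplitudes of the 0-and-1 state. Note that the result is one entangled three-qubit state, not three independent copies, which the no-cloning theorem rules out:

```latex
% Encoding one qubit into three by entanglement (bit-flip code), not by copying.
\[
  \alpha\,|0\rangle + \beta\,|1\rangle
  \;\longmapsto\;
  \alpha\,|000\rangle + \beta\,|111\rangle
  \;\neq\;
  \bigl(\alpha\,|0\rangle + \beta\,|1\rangle\bigr)^{\otimes 3}
\]
```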

Take, for example, one qubit in a 0-and-1 state. Using entanglement, two more qubits can be roped in to form a quantum state in which all three are 0 and, at the same time, all three are 1. Call it 000-and-111. The information in that state is the same as in the original qubit, and together the three form the logical qubit. Now if, say, the second of the three data qubits flips, the state becomes 010-and-101. To detect such a flip, researchers add one qubit between the first and second data qubits and another between the second and third. Measurements of those “ancillary” qubits reveal which of the original three data qubits flipped, without the data qubits ever being measured. In principle, researchers can then flip the errant qubit back to its original state.
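
Because a bit flip shifts both halves of the 000-and-111 state the same way, the bookkeeping behind those ancillary measurements can be sketched without simulating full quantum states. The snippet below is a hypothetical illustration (not Google’s decoder): each ancillary qubit reports the parity of its two neighbors, and the pair of parities singles out the flipped data qubit.

```python
# Illustrative syndrome lookup for the three-qubit bit-flip code.
# Each ancillary qubit effectively measures the parity of its two neighbours.
def syndrome(flips):
    """flips[i] is 1 if data qubit i has been hit by a bit flip."""
    a01 = flips[0] ^ flips[1]   # ancilla between the first and second data qubits
    a12 = flips[1] ^ flips[2]   # ancilla between the second and third data qubits
    return (a01, a12)

# Which single data qubit a given syndrome points to (None = no flip detected).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

# Example: the second data qubit flips, turning 000-and-111 into 010-and-101.
assert LOOKUP[syndrome([0, 1, 0])] == 1
```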

Now, the Google Quantum AI team has shown how the scheme improves as the information in the logical qubit is spread among more and more physical qubits. Using a 72-qubit chip, the team encoded one logical qubit in two ways: in a grid of 17 qubits (nine data and eight ancillary qubits) or in a grid of 49 qubits (25 data and 24 ancillary qubits). The researchers put each grid through 25 measurement cycles, looking for flipped qubits. Rather than correct the flips, they merely tracked them, which sufficed for the experiment, says Julian Kelly, a physicist and director of quantum hardware at Google.
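
Those grid sizes follow the usual surface-code bookkeeping, assuming the standard “rotated” layout that matches the numbers above: a patch of code distance d uses d² data qubits and d² − 1 ancillary qubits. A quick check:

```python
# Qubit counts for a surface-code patch of odd code distance d
# (standard rotated layout; matches the 17- and 49-qubit grids above).
def surface_code_counts(d):
    data = d * d             # data qubits
    ancilla = d * d - 1      # ancillary (parity-check) qubits
    return data, ancilla, data + ancilla

assert surface_code_counts(3) == (9, 8, 17)    # the smaller grid
assert surface_code_counts(5) == (25, 24, 49)  # the larger grid
```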

After the 25 cycles, they measured the data qubits directly to see whether more flips had sneaked in than the ancillary qubits had tracked, which would mean the machine had lost track of the logical qubit. Over many trials, the probability of losing the logical qubit per cycle was 3.028% with the smaller grid and 2.914% with the larger one, the team reports today in Nature. Thus the error rate decreased as the number of physical qubits increased, albeit barely.
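
A rough back-of-envelope reading of those two figures (my own arithmetic, assuming errors in different cycles are independent) shows both how small the gain is and how quickly the per-cycle errors compound over 25 cycles:

```python
# Per-cycle probabilities of losing the logical qubit, from the reported results.
p_small = 0.03028   # 17-qubit grid
p_large = 0.02914   # 49-qubit grid

# Ratio of error rates: the larger grid suppresses errors by only ~4% per cycle.
print(p_small / p_large)     # ~1.04

# Chance the logical qubit survives all 25 cycles, assuming independent cycles.
print((1 - p_small) ** 25)   # ~0.46 for the smaller grid
print((1 - p_large) ** 25)   # ~0.48 for the larger grid
```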

Those numbers may seem underwhelming, as even a single physical qubit has a lower error rate. But the scaling matters more than the logical qubit’s current reliability, Kelly says. “Scalability is really the trick,” he says. Still, to achieve Google’s goal of encoding a logical qubit onto 1000 physical ones with an error rate of 0.0001%, the scaling must become 20 times better.

Google’s experiment isn’t the only game in town, notes Greg Kuperberg, a mathematician at the University of California, Davis. The company Quantinuum has performed an experiment in which the logical qubit is more reliable than the underlying physical ones, using trapped-ion qubits, and physicists at Yale University have done the same in an experiment that mixes superconducting qubits and photons. However, ion systems may not scale as easily, and the Yale scheme is an “apples to oranges” comparison, Kuperberg says.

Still, Kuperberg says, the results show that physicists are on the threshold of using imperfect physical qubits to make much better logical ones. “I’m still going to say it’s the most important benchmark [in quantum computing] that I can think of right now.”
