Neural networks speed up quantum state measurements – Physics World

(Image: quantum-algorithm abstract. Courtesy: iStock/Anadmist)

Neural networks can estimate the degree of entanglement in quantum systems far more efficiently than traditional techniques, a new study shows. By side-stepping the need to fully characterize quantum states, the new deep learning method could prove especially useful for large-scale quantum technologies, where quantifying entanglement will be crucial but resource limitations make full state characterization unrealistic.

Entanglement – a situation in which multiple particles share a common wavefunction, so that disturbing one particle affects all others – is at the heart of quantum mechanics. Measuring the degree of entanglement in a system is thus part of understanding how “quantum” it is, says study co-author Miroslav Ježek, a physicist at Palacký University in Czechia. “You can observe this behaviour starting from simple two-particle systems where the fundamentals of quantum physics are discussed,” he explains. “On the other hand, there is a direct link between, for example, changes of entanglement and phase transitions in macroscopic matter.”

The degree to which any two particles in a system are entangled can be quantified by a single number. Getting the exact value of this number requires reconstructing the wavefunction, but measuring a quantum state destroys it, so multiple copies of the same state must be measured over and over again. This is called quantum tomography in analogy to classical tomography, in which a series of 2D images is used to construct a 3D one, and it is an unavoidable consequence of quantum theory. “If you could learn about a quantum state from one measurement a qubit would not be a qubit – it would be a bit – and there would be no quantum communication,” says Ana Predojević, a physicist at Stockholm University, Sweden, and a member of the study team.
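For two qubits, one widely used single-number measure is the Wootters concurrence, which runs from 0 for an unentangled state to 1 for a maximally entangled one. The short NumPy sketch below is an illustrative aside rather than anything taken from the study; the function name and the Bell-state example are chosen purely for illustration.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4):
    0 for separable states, 1 for maximally entangled ones."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                 # "spin-flipped" state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
    lam = np.sort(lam)[::-1]                         # descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# The Bell state (|00> + |11>)/sqrt(2) is maximally entangled:
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(concurrence(bell))   # ~1.0
```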

The problem is that the inherent uncertainty of a quantum measurement makes it extremely difficult to measure the entanglement between (for example) qubits in a quantum processor, since one must perform full tomography of the multi-qubit wavefunction. Even for a small processor, this would take days: “You can’t do just one measurement and say whether you have entanglement or not,” says Predojević. “It’s like when people do a CAT [computed axial tomography] scan of your spine – you need to be in the tube 45 minutes so they can take the full image: you can’t ask whether there’s something wrong with this or that vertebra from a five-minute scan.”
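A rough back-of-the-envelope calculation (textbook scaling, not a figure from the study) shows why the measurement burden explodes: an n-qubit density matrix contains 4^n - 1 independent real parameters, and standard tomography with local Pauli measurements needs 3^n distinct measurement settings, each repeated many times to build up statistics.

```python
# Back-of-the-envelope scaling for full state tomography (illustrative only):
# an n-qubit density matrix has 4**n - 1 independent real parameters, and
# local Pauli tomography needs 3**n measurement settings, each repeated
# many times to gather statistics.
for n in (2, 4, 8, 16):
    print(f"{n:2d} qubits: {4**n - 1:>12,} parameters, {3**n:>12,} Pauli settings")
```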

Finding good enough answers

Although calculating entanglement with 100% accuracy requires full quantum state tomography, several algorithms exist that can guess the quantum state from partial information. The problem with this approach, Ježek says, is that “there is no mathematical proof that with some limited number of measurements you [can] say something about entanglement at some precision level”.

In the new work, Ježek, Predojević and colleagues took a different tack, jettisoning the notion of quantum state reconstruction altogether in favour of targeting the degree of entanglement alone. To do this, they designed deep neural networks to study entangled quantum states and trained them on numerically generated data. “We randomly select quantum states and, having generated the state, we know the output of the network because we know the amount of entanglement in the system,” explains Ježek. “But we can also simulate the data that we would get during measurement of different numbers of copies from different directions…These simulated data are the input of the network.”
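The overall recipe can be captured in a toy sketch. The code below is a deliberately simplified stand-in for the group’s pipeline, assuming two-qubit pure states, the exact pure-state concurrence as the training label, finite-shot local Pauli measurements as the input, and a small off-the-shelf network (scikit-learn’s MLPRegressor); the actual study uses its own state families, measurement schemes and network architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Single-qubit Pauli operators; measuring each qubit along X, Y or Z gives
# the nine local measurement settings whose outcomes feed the network.
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def random_pure_state():
    """Haar-random two-qubit pure state a|00> + b|01> + c|10> + d|11>."""
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    return psi / np.linalg.norm(psi)

def concurrence_pure(psi):
    """Exact entanglement (concurrence) of a pure two-qubit state: 2|ad - bc|."""
    return 2 * abs(psi[0] * psi[3] - psi[1] * psi[2])

def simulated_measurements(psi, shots=100):
    """Finite-shot estimates of <A x B> for all nine local Pauli settings."""
    rho = np.outer(psi, psi.conj())
    feats = []
    for A in paulis:
        for B in paulis:
            exact = np.real(np.trace(rho @ np.kron(A, B)))   # ideal value in [-1, 1]
            p_plus = np.clip((1 + exact) / 2, 0, 1)          # probability of outcome +1
            feats.append(2 * rng.binomial(shots, p_plus) / shots - 1)
    return feats

# Labelled training set: simulated noisy measurements in, known entanglement out
states = [random_pure_state() for _ in range(5000)]
X = np.array([simulated_measurements(s) for s in states])
y = np.array([concurrence_pure(s) for s in states])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
net.fit(X, y)
```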

The networks used these data to teach themselves to make ever-better estimates of the entanglement from given sets of measurements. The researchers then checked the algorithm’s accuracy using a second set of simulated data. They found its errors were around 10 times lower than those of a traditional quantum tomography estimation algorithm.
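Continuing the toy sketch above (its error figure has nothing to do with the numbers reported in the paper), such a check against a second simulated dataset looks something like this:

```python
# Held-out check: fresh simulated states the network has never seen
test_states = [random_pure_state() for _ in range(1000)]
X_test = np.array([simulated_measurements(s) for s in test_states])
y_test = np.array([concurrence_pure(s) for s in test_states])

errors = np.abs(net.predict(X_test) - y_test)
print(f"mean absolute error on held-out data: {errors.mean():.3f}")
```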

Testing the method experimentally

Finally, the researchers experimentally measured two real entangled systems: a resonantly pumped semiconductor quantum dot and a spontaneous parametric down-conversion two-photon source. “We measured full quantum state tomography…and from this we knew everything about the quantum state,” says Ježek. “Then we omitted some of these measurements.” As they removed more and more measurements, they compared the error in the predictions of their deep neural networks with the errors from the same traditional algorithm. The error of the neural networks was significantly lower.

Ryan Glasser, a quantum optics expert at Tulane University in Louisiana, US, who has previously used machine learning to estimate quantum states, calls the new work “significant”. “One of the problems quantum technologies are running into right now is that we’re getting to the point where we can scale things to larger systems, and…you want to be able to fully understand your system,” Glasser says. “Quantum systems are notoriously delicate and difficult to measure and fully characterize…[The researchers] show that they can very accurately quantify the amount of entanglement in their system, which is very useful as we go to larger and larger quantum systems because nobody wants a two-qubit quantum computer.”

The group now plans to extend its research to larger quantum systems. Ježek is also interested in the inverse problem: “Let’s say we need to measure the entanglement of a quantum system with a precision of, say, 1%,” he says. “What minimum level of measurement do we need to get that level of entanglement estimation?”

The research is published in Science Advances.
