“Quantum supremacy” using a programmable superconducting processor

This topic is only tangentially related to gene-environment interactions; however, given the incredible amounts of data generated during this past decade of DNA sequencing, RNA-sequencing transcriptomics, and comparisons of hundreds of genomes from different species, the bigger and faster that “supercomputers” become, the more of these abundant data can be accurately assessed in less time. “Quantum computers” promise to perform certain tasks much faster than ordinary (classical) computers. In essence, a quantum computer carefully orchestrates quantum effects (superposition, entanglement, and interference) to explore a huge computational space and ultimately converge on a solution, or solutions, to a problem. If the number of quantum bits (qubits) and operations reaches even modest levels, then carrying out the same task on a state-of-the-art classical supercomputer becomes intractable on any reasonable timescale; this regime is termed quantum computational supremacy. However, reaching this level requires a robust quantum processor, because each additional imperfect operation chips away at the desired performance (so-called ‘noise’ in the system).
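To give a sense of scale for that “huge computational space” (my own back-of-the-envelope illustration, not from the article): an n-qubit state is described by 2^n complex amplitudes, so merely storing it on a classical machine becomes hopeless well before n reaches 60.

```python
# Minimal sketch (my own illustration, not from the article): memory needed to
# store the full state vector of an n-qubit system on a classical computer.
# Each of the 2**n complex amplitudes takes 16 bytes (two 64-bit floats).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required to hold all 2**n complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 53):
    print(f"{n:2d} qubits -> {2 ** n:.2e} amplitudes, {state_vector_bytes(n):.2e} bytes")

# 53 qubits already demand ~1.4e17 bytes (~128 PiB), far beyond any single
# machine, which is why classical simulation must fall back on slower methods.
```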

It has therefore been questioned whether a sufficiently large quantum computer could ever be controlled in practice. But now, authors [see attached article and editorial] report quantum supremacy using a 53-qubit processor. Authors [from Google] chose a task that is related to random-number generation (i.e. sampling the output of a pseudo-random quantum circuit). This task is implemented by a sequence of operational cycles, each of which applies operations, called “gates”, to every qubit in an n-qubit processor. These operations include randomly selected single-qubit gates and prescribed two-qubit gates. The output is then determined by measuring each qubit. The resulting strings of 0’s and 1’s are not uniformly distributed over all 2^n possibilities.
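As a toy illustration of that sampling task (my own sketch in Python/NumPy, assuming a brute-force state-vector simulator on just 4 qubits; nothing like the actual Sycamore hardware or software): cycles of random single-qubit gates plus a prescribed entangling two-qubit gate are applied, and then the qubits are “measured” by sampling bitstrings from the squared amplitudes.

```python
# Toy sketch (my own illustration, assumed and simplified; not the Sycamore
# implementation): brute-force state-vector simulation of a small random
# circuit, then sampling bitstrings from the resulting probability distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 4                              # qubits here (53 in the actual experiment)
dim = 2 ** n

def random_single_qubit_gate():
    """Random 2x2 unitary obtained from a QR decomposition."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

CZ = np.diag([1, 1, 1, -1]).astype(complex)     # a prescribed two-qubit gate

def expand(gate, targets):
    """Embed a gate acting on adjacent qubits `targets` into the full n-qubit space."""
    left = np.eye(2 ** targets[0], dtype=complex)
    right = np.eye(2 ** (n - targets[-1] - 1), dtype=complex)
    return np.kron(np.kron(left, gate), right)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                  # start in |0000>

for cycle in range(8):                          # a few operational cycles
    for k in range(n):                          # random single-qubit layer
        state = expand(random_single_qubit_gate(), [k]) @ state
    pair = [cycle % (n - 1), cycle % (n - 1) + 1]
    state = expand(CZ, pair) @ state            # entangling two-qubit gate

probs = np.abs(state) ** 2                      # Born-rule probabilities
probs /= probs.sum()                            # guard against rounding error
samples = rng.choice(dim, size=10, p=probs)     # "measure" all qubits, 10 repetitions
print([format(int(s), f"0{n}b") for s in samples])   # bitstrings are NOT uniform
```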

Instead, these strings have a preferential, circuit-dependent structure, with certain strings being much more likely than others because of quantum entanglement and quantum interference. Repeating the experiment and sampling a sufficiently large number of these outputs yields a distribution of likely outcomes. Simulating this probability distribution on a classical computer, even using today’s leading algorithms, becomes exponentially more challenging as the number of qubits and operational cycles is increased.
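A tiny, self-contained example of that preferential structure (my own illustration, not the circuit used in the paper): a Hadamard gate followed by a CNOT produces the entangled Bell state (|00> + |11>)/sqrt(2), so the bitstrings 00 and 11 each occur about half the time, while interference leaves 01 and 10 essentially never observed.

```python
# Tiny self-contained example (my own illustration, not from the article):
# a Hadamard gate on qubit 0 followed by a CNOT yields the entangled Bell
# state (|00> + |11>)/sqrt(2); '01' and '10' end up with essentially zero
# probability, so only '00' and '11' are ever observed.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                                   # |00>
state = CNOT @ (np.kron(H, I2) @ state)          # H on qubit 0, then CNOT
probs = np.abs(state) ** 2                       # [0.5, 0, 0, 0.5]
probs /= probs.sum()                             # guard against rounding error

rng = np.random.default_rng(1)
counts = np.bincount(rng.choice(4, size=1000, p=probs), minlength=4)
for idx, c in enumerate(counts):
    print(f"{idx:02b}: {c}")                     # roughly 500 / 0 / 0 / 500
```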

Authors [see attached article] used a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space; they created quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which the authors verified using classical simulations. Their “Sycamore” processor required ~200 seconds to sample one instance of a quantum circuit a million times [current benchmarks indicate that the equivalent task for a state-of-the-art classical supercomputer would take ~10,000 years]. Authors claim that “this dramatic increase in speed, compared to all known classical algorithms, is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.”
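The quoted numbers are easy to sanity-check (the figures are from the article; the arithmetic is mine):

```python
# Quick arithmetic check (figures from the article; the ratio is my own calculation).
dim = 2 ** 53
print(f"2^53 = {dim:.2e}")                        # ~9.01e15, i.e. about 10^16

sycamore_seconds = 200                            # one circuit sampled a million times
classical_seconds = 10_000 * 365.25 * 24 * 3600   # ~10,000 years, as estimated
print(f"speed-up factor ~ {classical_seconds / sycamore_seconds:.1e}")   # ~1.6e9
```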

HOWEVER, one week later, IBM issued a report contradicting these claims by Google. On 21 Oct 2019, IBM claimed that a classical supercomputer could carry out this same task in only 2.5 days, using a different approach. “This would mean that Google’s machine had achieved an important milestone,” IBM said, but “not quantum supremacy.” Designers of quantum computers “seek a performance edge over classical computers by using strange aspects of quantum mechanics.” ☹
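For perspective (my own back-of-the-envelope arithmetic, taking both parties’ figures at face value): under IBM’s estimate, Sycamore would still be roughly a thousand times faster for this task, but the classical computation would no longer be hopeless.

```python
# Back-of-the-envelope comparison (my arithmetic, using the figures quoted above).
ibm_seconds = 2.5 * 24 * 3600       # IBM's estimate: 2.5 days
sycamore_seconds = 200              # Google's reported runtime
print(f"remaining speed-up ~ {ibm_seconds / sycamore_seconds:.0f}x")   # ~1080x
```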

DwN

Nature 24 Oct 2019; 574: 505-510 & editorial pp 487-488
