by Alex Neville and Chris Sparrow (University of Bristol)

Boson sampling has emerged as a leading candidate for demonstrating “quantum computational supremacy”. We have devised improved classical algorithms to solve the problem, and shown that photon loss is likely to prevent a near-term demonstration of quantum computational supremacy by boson sampling.

The prospect of harnessing precisely engineered quantum systems to run algorithms which vastly outperform their best classical counterparts has generated a great deal of interest and investment in the field of quantum computing. However, a universal, fault-tolerant quantum computer is unlikely to be built soon. In view of this potentially long wait, several new problems have recently been proposed with the aim of demonstrating “quantum computational supremacy”, the point at which a quantum machine performs a computational task that is beyond the capability of any classical computer [1], using specialised near-term quantum devices.

One such proposal is the boson sampling problem introduced by Aaronson and Arkhipov [2]. The problem consists of sampling from the output distribution of detection events generated when many single photons are injected concurrently into a randomly chosen network of linear optical components. A sample value is generated by recording where the photons are detected at the end of the linear optical network. The probability of each possible output of the experiment is related to a matrix function known as the permanent, which is in general extremely hard to compute: the best known exact algorithms take time exponential in the size of the matrix. By making the plausible conjecture that the permanent is hard to approximate for Gaussian random matrices, as well as another reasonable conjecture about the distribution of these permanents, Aaronson and Arkhipov were able to show that an efficient classical algorithm for even approximate boson sampling would lead to the collapse of the polynomial hierarchy of complexity classes to its third level, something considered extremely unlikely in computational complexity theory. The fact that the result applies to approximate boson sampling is key, as it opens up the possibility of solving the boson sampling problem with realistic quantum experiments, without the need for quantum error correction. Physically, the complexity of the problem stems from the complex quantum interference of indistinguishable photons; if the photons are made distinguishable (say, by using photons of different colours) the problem is no longer hard.
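As a concrete illustration (not code from our paper), in an n-photon experiment each output probability is proportional to the squared modulus of the permanent of an n × n submatrix of the interferometer's unitary. The minimal sketch below evaluates the permanent with Ryser's formula, the standard exact method, which already takes time exponential in n; the complex Gaussian matrix simply stands in for such a submatrix.

  from itertools import combinations
  import numpy as np

  def permanent(A):
      """Permanent of an n x n matrix via Ryser's formula (exponential time)."""
      n = A.shape[0]
      total = 0j
      # Sum over non-empty column subsets S with sign (-1)^(n - |S|).
      for k in range(1, n + 1):
          for cols in combinations(range(n), k):
              total += (-1) ** (n - k) * np.prod(A[:, list(cols)].sum(axis=1))
      return total

  # A complex Gaussian matrix standing in for a 5 x 5 submatrix of the unitary.
  rng = np.random.default_rng(0)
  A = (rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))) / np.sqrt(2)
  print(abs(permanent(A)) ** 2)  # proportional to one output probability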

Our best new classical boson sampling algorithm [3] is based on Metropolised Independence Sampling (MIS), a Markov chain Monte Carlo procedure similar to the famous Metropolis-Hastings algorithm. We start a list by proposing a sample value from some easy-to-sample distribution, then iteratively propose new values, either adding the new value to the list or repeating the previous one. Which of the two happens is decided by a carefully crafted acceptance rule, which depends on the probabilities of the proposed and current values under both the target and proposal distributions, and which guarantees that the list converges towards a genuine boson sampling sample.
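A minimal sketch of the generic MIS loop follows; the function names are placeholders. For boson sampling, target_prob would evaluate the permanent-based probability of an output pattern and proposal_prob its probability under the easy-to-sample proposal.

  import numpy as np

  def mis_chain(target_prob, proposal_prob, draw_proposal, n_steps, rng):
      """Metropolised independence sampling: proposals are drawn
      independently of the current state; the acceptance rule steers
      the chain towards the target distribution."""
      x = draw_proposal(rng)
      w_x = target_prob(x) / proposal_prob(x)  # importance weight of current state
      chain = [x]
      for _ in range(n_steps):
          y = draw_proposal(rng)               # fresh, state-independent proposal
          w_y = target_prob(y) / proposal_prob(y)
          # Accept y with probability min(1, w_y / w_x); otherwise repeat x.
          if rng.random() < min(1.0, w_y / w_x):
              x, w_x = y, w_y
          chain.append(x)
      return chain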

We identified the corresponding distribution for distinguishable particles at the output of the linear optical network as a proposal distribution which provides rapid convergence to the boson sampling distribution. Although the probabilities in this distribution are also given by matrix permanents (of different, non-negative matrices), it can be sampled efficiently. Using this proposal, we found strong numerical evidence that computing just 200 matrix permanents is enough to generate a boson sampling value via MIS for problem sizes of up to 30 photons, amounting to a speed-up of roughly 50 orders of magnitude over the best previously suggested classical algorithm at this problem size.
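Sampling from this proposal is easy because distinguishable photons do not interfere: a photon entering input mode i exits output mode j with probability |U[j, i]|², independently of all the others. A minimal sketch (the unitary U and the list of occupied input modes are whatever the experiment prescribes):

  import numpy as np

  def sample_distinguishable(U, input_modes, rng):
      """One output pattern for distinguishable photons: the photon in
      input mode i exits mode j with probability |U[j, i]|^2."""
      probs = np.abs(U) ** 2  # column i sums to 1 because U is unitary
      m = U.shape[0]
      return sorted(rng.choice(m, p=probs[:, i]) for i in input_modes)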

By assuming that 200 matrix permanent computations suffice to produce a good sample at larger photon numbers, we were able to predict how long it would take to solve boson sampling for up to 100 photons by running MIS on a powerful supercomputer. Alongside this, we computed the expected time to perform boson sampling experimentally as a function of the probability of a single photon surviving the experiment (i.e., not being lost between source and detector). We found that more than 50 photons would be required before the classical runtime exceeded a week. Achieving this experimentally would not only require the ability to produce a state of 50 indistinguishable photons at a high rate (the current record-holding boson sampling experiment used five photons), but also that every photon survives the experiment with probability greater than 50%. Given that each photon would need to be injected into the circuit, pass through over 1,000 beamsplitters and be coupled out of the circuit to single-photon detectors, this would amount to a significant engineering breakthrough which is unlikely to be realised in the near future.
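The severity of photon loss follows from a simple scaling: if each photon survives with probability η, an n-photon detection event occurs with probability ηⁿ, so the event rate collapses exponentially in n. A back-of-the-envelope sketch (the gigahertz repetition rate is an illustrative assumption, not a figure from our analysis):

  def seconds_per_sample(n_photons, eta, clock_hz=1e9):
      """Expected time per n-photon event when each photon survives
      with probability eta and the source fires at clock_hz."""
      return 1.0 / (clock_hz * eta ** n_photons)

  # At eta = 0.5 and n = 50, one event takes about 2^50 / 1e9 seconds,
  # i.e. roughly 13 days per sample.
  print(seconds_per_sample(50, 0.5) / 86400, "days")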
Our results not only provide a benchmark for quantum supremacy by boson sampling, but also highlight how significant the experimental improvement must be in order to achieve this.

Link:
https://kwz.me/hB1

References:
[1] A. Harrow, A. Montanaro: “Quantum computational supremacy”, Nature 549 (2017), 203–209.
[2] S. Aaronson, A. Arkhipov: “The computational complexity of linear optics”, Theory Comput. 9 (2013), 143–252.
[3] A. Neville, C. Sparrow, et al.: “Classical boson sampling algorithms with superior performance to near-term experiments”, Nature Physics 13 (2017), 1153–1157.

Please contact:
Alex Neville, Chris Sparrow
University of Bristol, UK
