New Evidence That Quantum Mechanics Is Hard to Simulate on Classical Computers
Scott Aaronson (MIT)
Joint work with Alex Arkhipov

Computer Scientist / Physicist Nonaggression Pact
You tolerate these complexity classes: P, NP, BPP, BQP, #P, PH.
And I don't inflict these on you: AM, AWPP, BQP/qpoly, MA, P/poly, PSPACE, QCMA, QIP, QMA, SZK, YQP.

In 1994, something big happened in the foundations of computer science, whose meaning is still debated today...

Why exactly was Shor's algorithm important?
Boosters: Because it means we'll build QCs!
Skeptics: Because it means we won't build QCs!
Me: Even for reasons having nothing to do with building QCs!

Shor's algorithm was a hardness result for one of the central computational problems of modern science: QUANTUM SIMULATION.
(Figure: use of DoE supercomputers by area, from a talk by Alán Aspuru-Guzik.)
Shor's Theorem: QUANTUM SIMULATION is not in probabilistic polynomial time, unless FACTORING is also.

Today: A new kind of hardness result for simulating quantum mechanics
Advantages:
• Based on a more "generic" complexity assumption than the hardness of FACTORING
• Gives evidence that QCs have capabilities outside the entire polynomial hierarchy
• Only involves linear optics! (With single-photon Fock state inputs, and nonadaptive multimode photon-detection measurements)
Disadvantages:
• Applies to relational problems (problems with many possible valid outputs) or sampling problems, not to decision problems
• Harder to convince a skeptic that your QC is indeed solving the relevant hard problem
• Less relevant for the NSA

Before We Go Further, A Bestiary of Complexity Classes...
(Figure: a diagram of class inclusions. P sits inside BPP and NP, which contains FACTORING and 3SAT; those sit inside BQP and PH; everything sits inside P^#P, home of PERMANENT and other COUNTING problems.)
How complexity theorists say "such-and-such is damn unlikely": "If such-and-such is true, then PH collapses to a finite level."

Our Results
Suppose the output distribution of any linear-optics circuit can be efficiently sampled classically (e.g., by Monte Carlo). Then the polynomial hierarchy collapses (indeed, P^#P = BPP^NP). Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, still the polynomial hierarchy collapses.

Suppose the output distribution of any linear-optics circuit can even be approximately sampled efficiently classically. Then in BPP^NP, one can nontrivially approximate the permanent of a matrix of independent N(0,1) Gaussian entries (with high probability over the choice of matrix).

"Permanent-of-Gaussians Conjecture" (PGC): The above problem is #P-complete (i.e., as hard as worst-case PERMANENT).

If the PGC is true, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from, unless the polynomial hierarchy collapses.

Related Work
Knill, Laflamme, Milburn 2001: Linear optics with adaptive measurements yields universal QC
Valiant 2002, Terhal-DiVincenzo 2002: Noninteracting fermions can be simulated in P
A. 2004: Quantum computers with postselection on unlikely measurement outcomes can solve hard counting problems (PostBQP = PP)
Shepherd, Bremner 2009: "Instantaneous quantum computing" can solve sampling problems that might be hard classically
Bremner, Jozsa, Shepherd 2010: Efficient simulation of instantaneous quantum computing would collapse PH

Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS. All I can say is, the bosons got the harder job. Their transition amplitudes are given respectively by

Per(A) = Σ_{σ∈S_n} ∏_{i=1}^{n} a_{i,σ(i)}
Det(A) = Σ_{σ∈S_n} sgn(σ) ∏_{i=1}^{n} a_{i,σ(i)}

Linear Optics for Dummies (or computer scientists)
Computational basis states have the form |S⟩ = |s_1,…,s_m⟩, where s_1,…,s_m are nonnegative integers such that s_1+…+s_m = n.
n = # of identical photons, m = # of modes. For us, m > n.
Starting from a fixed initial state, say |I⟩ = |1,…,1,0,…,0⟩, you get to choose any m×m mode-mixing unitary U.
U induces a unitary φ(U) on the C(m+n−1, n)-dimensional space of n-photon states, defined by

⟨S|φ(U)|T⟩ = Per(U_{S,T}) / √(s_1!⋯s_m! t_1!⋯t_m!)
Here U_{S,T} is an n×n matrix obtained by taking s_i copies of the i-th row of U and t_j copies of the j-th column, for all i, j. Then you get to measure φ(U)|I⟩ in the computational basis.

Upper Bounds on the Power of Linear Optics
Theorem (Feynman 1982, Abrams-Lloyd 1996): Linear-optics computation can be simulated in BQP.
Proof idea: Decompose the m×m unitary U into a product of O(m²) elementary "linear-optics gates" (beamsplitters and phaseshifters), then simulate each gate using polylog(n) standard qubit gates.
Theorem (Bartlett-Sanders et al.): If the inputs are Gaussian states and the measurements are homodyne, then linear-optics computation can be simulated in P.
Theorem (Gurvits): There exist classical algorithms to approximate ⟨S|φ(U)|T⟩ to additive error ε in randomized poly(n, 1/ε) time, and to compute the marginal distribution on photon numbers in k modes in n^{O(k)} time.

By contrast, exactly sampling the distribution over all n photons is extremely hard! Here's why...

Given any matrix A ∈ C^{n×n}, we can construct an m×m mode-mixing unitary U (where m = 2n) as follows:

U = ( A/2  B )
    (  C   D )

Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply φ(U), and measure. Then the probability of observing |I⟩ again is

p := |⟨I|φ(U)|I⟩|² = |Per(A)|² / 2^{2n}

Claim 1: p is #P-complete to estimate (up to a constant factor).
Idea: Valiant proved that the PERMANENT is #P-complete. Can use known (classical) reductions to go from a multiplicative approximation of |Per(A)|² to Per(A) itself.
Claim 2: Suppose we had a fast classical algorithm for linear-optics sampling. Then we could estimate p in BPP^NP.
Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate

Pr_r[M(r) outputs I] = |⟨I|φ(U)|I⟩|² = |Per(A)|² / 2^{2n}

Conclusion: Suppose we had a fast classical algorithm for linear-optics sampling. Then P^#P = BPP^NP.
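The permanent and determinant formulas above, and the idea behind Gurvits' randomized approximation, can be sketched in a few lines of Python. This is an illustrative sketch only (the function names `permanent`, `determinant`, and `glynn_estimate` are mine, not from the talk): the estimator uses the Glynn/Gurvits identity Per(A) = E over uniform x ∈ {−1,1}^n of (∏_i x_i)·∏_j(∑_i x_i a_{ij}), whose average is the permanent because every cross-term with an unmatched x_i vanishes.

```python
from itertools import permutations
import random

def permanent(A):
    # Per(A) = sum over permutations sigma of prod_i A[i][sigma(i)]  (no signs)
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

def determinant(A):
    # Det(A) = the same sum, but each term weighted by sgn(sigma)
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        term = (-1) ** inversions
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

def glynn_estimate(A, samples=20000, seed=0):
    # Unbiased estimator underlying Gurvits' additive-error algorithm:
    # Per(A) = E over uniform x in {-1,1}^n of (prod_i x_i) * prod_j (sum_i x_i*A[i][j])
    rng = random.Random(seed)
    n = len(A)
    total = 0.0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        term = 1.0
        for xi in x:
            term *= xi
        for j in range(n):
            term *= sum(x[i] * A[i][j] for i in range(n))
        total += term
    return total / samples

A = [[1, 2], [3, 4]]
print(permanent(A))       # 10  (1*4 + 2*3, no minus sign)
print(determinant(A))     # -2  (1*4 - 2*3)
print(glynn_estimate(A))  # close to 10
```

Note the contrast the slide is driving at: the signed sum collapses to a polynomial-time computation (Gaussian elimination), while the sign-free sum is #P-complete; the randomized estimator only achieves additive, not multiplicative, error.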
High-Level Idea
Estimating a sum of exponentially many positive or negative numbers: #P-complete.
Estimating a sum of exponentially many nonnegative numbers: still hard, but known to be in BPP^NP ⊆ PH.
If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent, thereby placing #P in PH and collapsing PH.

So why aren't we done? Because real quantum experiments are subject to noise. Would an efficient classical algorithm that sampled from a noisy distribution still collapse the polynomial hierarchy?

Main Result: Take a system of n identical photons with m = O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U. Let D be the distribution that results from measuring the photons. Suppose there's a fast classical algorithm that takes U as input, and samples any distribution even 1/poly(n)-close to D in variation distance. Then in BPP^NP, one can estimate the permanent of a matrix A of i.i.d. N(0,1) complex Gaussians, to additive error √(n!)/n^{O(1)}, with high probability over A.

Permanent-of-Gaussians Conjecture (PGC): This problem is #P-complete.

PGC ⇒ Hardness of Linear-Optics Sampling
Idea: Given a Gaussian random matrix A, we'll "smuggle" A into the m×m unitary transition matrix U for a system of n photons and m = O(n²) modes, in such a way that ⟨S|φ(U)|I⟩ = Per(A) for some basis state |S⟩.
Useful fact we rely on: given a Haar-random m×m unitary matrix, an n×n submatrix looks approximately Gaussian.
Then the classical sampler has "no way of knowing" which submatrix of U we care about, so even if it has 1/poly(n) error, with high probability it will return |S⟩ with probability proportional to |Per(A)|².
Then, just like before, we can use approximate counting to estimate Pr[|S⟩] ∝ |Per(A)|² in BPP^NP, and thereby solve a #P-complete problem.

Problem: Bosons like to pile on top of each other!
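The "useful fact" above, that an n×n submatrix of a Haar-random m×m unitary looks approximately like i.i.d. Gaussians once rescaled by √m, is easy to check numerically. A minimal sketch using NumPy (assumed available; `haar_unitary` is an illustrative name, and the QR-plus-phase-correction construction is the standard recipe for Haar sampling):

```python
import numpy as np

def haar_unitary(m, rng):
    # QR-decompose a complex Ginibre matrix, then fix the phases of the
    # diagonal of R so that the resulting Q is exactly Haar-distributed.
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(42)
m, n = 400, 5
u = haar_unitary(m, rng)

# Sanity check: U really is unitary.
assert np.allclose(u @ u.conj().T, np.eye(m), atol=1e-10)

# Entries of sqrt(m) * (an n x n submatrix) should look like i.i.d.
# complex N(0,1), so the empirical mean of |entry|^2 should be near 1.
sub = np.sqrt(m) * u[:n, :n]
print(np.mean(np.abs(sub) ** 2))
```

This is only a statistical sanity check, not a proof; the talk's point is the quantitative version, that for m much larger than n² the submatrix distribution is close in variation distance to a Gaussian matrix, which is what lets A be "smuggled" into U.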
Call a basis state S = (s_1,…,s_m) good if every s_i is 0 or 1 (i.e., there are no collisions between photons), and bad otherwise. If bad basis states dominated, then our sampling algorithm might "work" without ever having to solve a hard PERMANENT instance.
Furthermore, the "bosonic birthday paradox" is even worse than the classical one! With two bosons and two boxes,

Pr[both particles land in the same box] = 2/3,

rather than 1/2 as with classical particles.
Fortunately, we show that with n bosons and m ≥ kn² modes, the probability of a collision is still at most (say) 1/2.

Experimental Prospects
What would it take to implement the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
• Reliable single-photon sources (Fock states, not coherent states)
• Photodetector arrays that can reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics or postselected measurements!
Our Proposal: Concentrate on (say) n ≈ 30 photons and m ≈ 1000 modes, so that classical simulation is difficult but not impossible.

Open Problems
• What are the exact resource requirements? E.g., can our experiment be done using a log(n)-depth linear-optics circuit?
• Prove the Permanent-of-Gaussians Conjecture! It would imply that even approximate classical simulation of linear-optics circuits would collapse PH.
• Are there other quantum systems for which approximate classical simulation would collapse PH?
• Do a linear-optics experiment that solves a classically-intractable sampling problem!
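The bosonic birthday-paradox figures quoted earlier (2/3 versus 1/2 for two particles in two boxes) can be reproduced by brute-force enumeration, under the toy model in which the bosonic distribution is uniform over occupation multisets while the classical one is uniform over ordered tuples. A small sketch (function names are illustrative, not from the talk):

```python
from itertools import combinations_with_replacement, product
from fractions import Fraction

def classical_collision_prob(n, m):
    # Pr[some box receives >= 2 balls] when n distinguishable balls are
    # dropped into m boxes, all m^n ordered outcomes equally likely.
    outcomes = list(product(range(m), repeat=n))
    bad = sum(1 for o in outcomes if len(set(o)) < n)
    return Fraction(bad, len(outcomes))

def bosonic_collision_prob(n, m):
    # Same question in the toy bosonic model: the uniform distribution is
    # over multisets (occupation-number basis states) rather than tuples.
    outcomes = list(combinations_with_replacement(range(m), n))
    bad = sum(1 for o in outcomes if len(set(o)) < n)
    return Fraction(bad, len(outcomes))

print(classical_collision_prob(2, 2))  # 1/2
print(bosonic_collision_prob(2, 2))    # 2/3
```

Running `bosonic_collision_prob(n, m)` for n bosons and m on the order of n² modes shows the collision probability dropping below 1/2, which is the regime the main result needs.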