Lecture 3: Quantum simulation algorithms
Dominic Berry
Macquarie University

Simulation of Hamiltonians (Seth Lloyd, 1996)

• We want to simulate the evolution
  $|\psi_t\rangle = e^{-iHt}|\psi_0\rangle$
• The Hamiltonian is a sum of terms:
  $H = \sum_{\ell=1}^{M} H_\ell$
• We can perform $e^{-iH_\ell t}$.
• For short times we can use
  $e^{-iH_1\delta t} e^{-iH_2\delta t} \cdots e^{-iH_{M-1}\delta t} e^{-iH_M\delta t} \approx e^{-iH\delta t}$
• For long times
  $\left(e^{-iH_1 t/r} e^{-iH_2 t/r} \cdots e^{-iH_M t/r}\right)^r \approx e^{-iHt}$
Simulation of Hamiltonians (Seth Lloyd, 1996)

• For short times we can use
  $e^{-iH_1\delta t} e^{-iH_2\delta t} \cdots e^{-iH_{M-1}\delta t} e^{-iH_M\delta t} \approx e^{-iH\delta t}$
• This approximation holds because
  $e^{-iH_1\delta t} e^{-iH_2\delta t} \cdots e^{-iH_M\delta t}$
  $= \left(\mathbb{I} - iH_1\delta t + O(\delta t^2)\right)\left(\mathbb{I} - iH_2\delta t + O(\delta t^2)\right) \cdots \left(\mathbb{I} - iH_M\delta t + O(\delta t^2)\right)$
  $= \mathbb{I} - iH_1\delta t - iH_2\delta t - \cdots - iH_M\delta t + O(\delta t^2)$
  $= \mathbb{I} - iH\delta t + O(\delta t^2)$
  $= e^{-iH\delta t} + O(\delta t^2)$
• If we divide the long time $t$ into $r$ intervals, then
  $e^{-iHt} = \left(e^{-iHt/r}\right)^r = \left(e^{-iH_1 t/r} e^{-iH_2 t/r} \cdots e^{-iH_M t/r} + O((t/r)^2)\right)^r$
  $= \left(e^{-iH_1 t/r} e^{-iH_2 t/r} \cdots e^{-iH_M t/r}\right)^r + O(t^2/r)$
• Typically, we want to simulate a system with some maximum allowable error $\varepsilon$. Then we need $r \propto t^2/\varepsilon$.
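The $O(t^2/r)$ scaling above is easy to check numerically. The following NumPy sketch (an illustration with randomly chosen Hermitian terms, not part of the lecture) builds the first-order product formula for $H = H_1 + H_2$ and confirms that doubling $r$ roughly halves the error:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    """Random Hermitian matrix."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def U(H, t):
    """Exact evolution exp(-iHt) via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H1, H2 = rand_herm(4), rand_herm(4)
H, t = H1 + H2, 1.0

def trotter_error(r):
    # (e^{-iH1 t/r} e^{-iH2 t/r})^r compared against e^{-iHt}
    step = U(H1, t / r) @ U(H2, t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - U(H, t), 2)

# Total error is O(t^2 / r), so doubling r should roughly halve the error.
e1, e2 = trotter_error(100), trotter_error(200)
print(e1 / e2)
```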
Higher-order simulation (Berry, Ahokas, Cleve, Sanders, 2007)

• A higher-order decomposition is
  $e^{-iH_1\delta t/2} \cdots e^{-iH_{M-1}\delta t/2} e^{-iH_M\delta t/2}\, e^{-iH_M\delta t/2} e^{-iH_{M-1}\delta t/2} \cdots e^{-iH_1\delta t/2} = e^{-iH\delta t} + O\!\left((M\|H\|\delta t)^3\right)$
• If we divide the long time $t$ into $r$ intervals, then
  $e^{-iHt} = \left(e^{-iHt/r}\right)^r = \left(e^{-iH_1 t/2r} \cdots e^{-iH_M t/2r}\, e^{-iH_M t/2r} \cdots e^{-iH_1 t/2r} + O\!\left((M\|H\| t/r)^3\right)\right)^r$
  $= \left(e^{-iH_1 t/2r} \cdots e^{-iH_M t/2r}\, e^{-iH_M t/2r} \cdots e^{-iH_1 t/2r}\right)^r + O\!\left((M\|H\|)^3 t^3/r^2\right)$
• Then we need $r \propto (M\|H\|t)^{1.5}/\varepsilon^{0.5}$.
• A general product formula can give error $O\!\left((M\|H\|t/r)^{2k+1}\right)$ for time $t/r$.
• For time $t$ the error is $O\!\left((M\|H\|t)^{2k+1}/r^{2k}\right)$.
• To bound the error as $\varepsilon$, the value of $r$ scales as
  $r \propto \frac{(M\|H\|t)^{1+1/2k}}{\varepsilon^{1/2k}}$
• The complexity is $r \times M$.
Higher-order simulation (Berry, Ahokas, Cleve, Sanders, 2007)

• To bound the error as $\varepsilon$,
  $r \propto \frac{(M\|H\|t)^{1+1/2k}}{\varepsilon^{1/2k}}$
• The complexity is $r \times M$.
• For Suzuki product formulae, we have an additional factor of $5^k$ in $r$:
  $r \propto \frac{5^k (M\|H\|t)^{1+1/2k}}{\varepsilon^{1/2k}}$
• The complexity then needs to be multiplied by a further factor of $5^k$. The overall complexity scales as
  $\frac{M 5^{2k} (M\|H\|t)^{1+1/2k}}{\varepsilon^{1/2k}}$
• We can also take an optimal value of $k \propto \sqrt{\log(M\|H\|t/\varepsilon)}$, which gives scaling
  $M^2 \|H\| t \exp\!\left[2\sqrt{\ln 5 \,\ln(M\|H\|t/\varepsilon)}\right]$
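The second-order scaling can be checked the same way as the first-order formula. For $M = 2$ the symmetric decomposition above collapses to $e^{-iH_1\delta t/2} e^{-iH_2\delta t} e^{-iH_1\delta t/2}$, so the total error is $O(1/r^2)$ and doubling $r$ should cut the error by about 4. A NumPy sketch with randomly chosen Hermitian terms:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def U(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H1, H2 = rand_herm(4), rand_herm(4)
H, t = H1 + H2, 1.0

def strang_error(r):
    # Symmetric (second-order) step: e^{-iH1 dt/2} e^{-iH2 dt} e^{-iH1 dt/2}
    dt = t / r
    step = U(H1, dt / 2) @ U(H2, dt) @ U(H1, dt / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - U(H, t), 2)

# Total error is O(1/r^2): doubling r should reduce it by a factor of ~4.
e1, e2 = strang_error(50), strang_error(100)
print(e1 / e2)
```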
Solving linear systems (Harrow, Hassidim & Lloyd, 2009)

• Consider a large system of linear equations: $A\boldsymbol{x} = \boldsymbol{y}$.
• First assume that the matrix $A$ is Hermitian.
• It is possible to simulate Hamiltonian evolution under $A$ for time $t$: $e^{-iAt}$.
• Encode the initial state in the form
  $|\boldsymbol{y}\rangle = \sum_{\ell=1}^{N} y_\ell |\ell\rangle$
• The state can also be written in terms of the eigenvectors of $A$ as
  $|\boldsymbol{y}\rangle = \sum_{k=1}^{N} \psi_k |\lambda_k\rangle$
• We can obtain the solution $|\boldsymbol{x}\rangle$ if we can divide each $\psi_k$ by $\lambda_k$.
• Use the phase estimation technique to place the estimate of $\lambda_k$ in an ancillary register to obtain
  $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle|\lambda_k\rangle$
Solving linear systems (Harrow, Hassidim & Lloyd, 2009)

• Use the phase estimation technique to place the estimate of $\lambda_k$ in an ancillary register to obtain
  $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle|\lambda_k\rangle$
• Append an ancilla and rotate it according to the value of $\lambda_k$ to obtain
  $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle|\lambda_k\rangle \left( \frac{1}{\lambda_k}|0\rangle + \sqrt{1 - \frac{1}{\lambda_k^2}}\,|1\rangle \right)$
• Invert the phase estimation technique to remove the estimate of $\lambda_k$ from the ancillary register, giving
  $\sum_{k=1}^{N} \psi_k |\lambda_k\rangle \left( \frac{1}{\lambda_k}|0\rangle + \sqrt{1 - \frac{1}{\lambda_k^2}}\,|1\rangle \right)$
• Use amplitude amplification to amplify the $|0\rangle$ component on the ancilla, giving a state proportional to
  $|\boldsymbol{x}\rangle \propto \sum_{k=1}^{N} \frac{\psi_k}{\lambda_k} |\lambda_k\rangle = \sum_{\ell=1}^{N} x_\ell |\ell\rangle$
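The spectral identity behind the algorithm, dividing each eigenbasis coefficient $\psi_k$ by $\lambda_k$, can be verified classically in a few lines. This NumPy sketch (a classical check of the linear algebra, not a quantum implementation) decomposes $\boldsymbol{y}$ in the eigenbasis of a Hermitian $A$ and rebuilds $\boldsymbol{x} = A^{-1}\boldsymbol{y}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hermitian A (real symmetric here for simplicity) and right-hand side y
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2
y = rng.normal(size=4)

# Decompose y in the eigenbasis of A: y = sum_k psi_k |lambda_k>
lam, V = np.linalg.eigh(A)
psi = V.T @ y

# Dividing each coefficient by lambda_k rebuilds x with A x = y
x = V @ (psi / lam)
print(np.allclose(A @ x, y))
```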
Solving linear systems (Harrow, Hassidim & Lloyd, 2009)

• What about non-Hermitian $A$?
• Construct a blockwise matrix
  $A' = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}$
• The inverse of $A'$ is then
  $A'^{-1} = \begin{pmatrix} 0 & (A^\dagger)^{-1} \\ A^{-1} & 0 \end{pmatrix}$
• This means that
  $A'^{-1} \begin{pmatrix} \boldsymbol{y} \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ \boldsymbol{x} \end{pmatrix}$
• In terms of the state, $|0\rangle|\boldsymbol{y}\rangle \mapsto |1\rangle|\boldsymbol{x}\rangle$.
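The Hermitian dilation is a purely linear-algebraic trick, so it can be checked directly. A NumPy sketch with an arbitrary non-Hermitian $A$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # non-Hermitian
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Hermitian dilation A' = [[0, A], [A^dag, 0]]
Z = np.zeros((3, 3))
Ap = np.block([[Z, A], [A.conj().T, Z]])
print(np.allclose(Ap, Ap.conj().T))   # A' is Hermitian

# Solving A' z = [y; 0] puts x = A^{-1} y in the lower block
z = np.linalg.solve(Ap, np.concatenate([y, np.zeros(3)]))
print(np.allclose(z[:3], 0))          # upper block is zero
print(np.allclose(A @ z[3:], y))      # lower block solves A x = y
```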
Solving linear systems (Harrow, Hassidim & Lloyd, 2009): Complexity Analysis

• We need to examine:
  1. The complexity of simulating the Hamiltonian to estimate the phase.
  2. The accuracy needed for the phase estimate.
  3. The possibility of $1/\lambda_k$ being greater than 1.
• The complexity of simulating the Hamiltonian for time $t$ is approximately $\propto \|A\| t = |\lambda_{\max}| t$.
• To obtain accuracy $\delta$ in the estimate of $\lambda$, the Hamiltonian needs to be simulated for time $\propto 1/\delta$.
• We actually need to multiply the state coefficients by $\lambda_{\min}/\lambda_k$, to give
  $\sum_{k=1}^{N} \frac{|\lambda_{\min}|}{\lambda_k} \psi_k |\lambda_k\rangle$
• To obtain accuracy $\varepsilon$ in $\lambda_{\min}/\lambda_k$, we need accuracy $\varepsilon \lambda_k^2 / |\lambda_{\min}|$ in the estimate of $\lambda$.
• The final complexity is
  $\sim \frac{\kappa^2}{\varepsilon}, \qquad \kappa := \frac{|\lambda_{\max}|}{|\lambda_{\min}|}$
Differential equations (Berry, 2010)

• Discretise the differential equation, then encode it as a linear system.
• Simplest discretisation: the Euler method,
  $\frac{d\boldsymbol{x}}{dt} = A\boldsymbol{x} + \boldsymbol{b} \quad\Rightarrow\quad \frac{\boldsymbol{x}_{j+1} - \boldsymbol{x}_j}{h} = A\boldsymbol{x}_j + \boldsymbol{b}$
• The linear system (the first block row sets the initial condition; the last rows set $\boldsymbol{x}$ to be constant):
  $\begin{pmatrix}
  I & 0 & 0 & 0 & 0 \\
  -(I + Ah) & I & 0 & 0 & 0 \\
  0 & -(I + Ah) & I & 0 & 0 \\
  0 & 0 & -I & I & 0 \\
  0 & 0 & 0 & -I & I
  \end{pmatrix}
  \begin{pmatrix} \boldsymbol{x}_0 \\ \boldsymbol{x}_1 \\ \boldsymbol{x}_2 \\ \boldsymbol{x}_3 \\ \boldsymbol{x}_4 \end{pmatrix}
  = \begin{pmatrix} \boldsymbol{x}_{\mathrm{in}} \\ \boldsymbol{b}h \\ \boldsymbol{b}h \\ 0 \\ 0 \end{pmatrix}$
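The block system above can be checked directly. This NumPy sketch (with a scalar $A = -1$ and illustrative values chosen here; the blocks generalise to matrices) builds the matrix, solves it, and confirms that the middle rows reproduce the Euler iteration while the last rows hold $\boldsymbol{x}$ constant:

```python
import numpy as np

# Euler discretisation of dx/dt = A x + b encoded as one linear system.
A = np.array([[-1.0]])
b = np.array([1.0])
h = 0.1
x_in = np.array([2.0])
I, Z = np.eye(1), np.zeros((1, 1))
E = I + A * h   # Euler step matrix

# Row 0 sets the initial condition; rows 1-2 are Euler steps
# x_{j+1} = (I + A h) x_j + b h; rows 3-4 keep x constant.
M = np.block([
    [ I,  Z,  Z,  Z,  Z],
    [-E,  I,  Z,  Z,  Z],
    [ Z, -E,  I,  Z,  Z],
    [ Z,  Z, -I,  I,  Z],
    [ Z,  Z,  Z, -I,  I],
])
rhs = np.concatenate([x_in, b * h, b * h, [0.0], [0.0]])
x = np.linalg.solve(M, rhs)

# The solution reproduces the Euler iteration, then stays constant
x1 = E @ x_in + b * h
x2 = E @ x1 + b * h
print(np.allclose(x, np.concatenate([x_in, x1, x2, x2, x2])))
```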
Quantum walks

• The quantum walk has position and coin values $|x, c\rangle$.
• It then alternates coin and step operators, e.g.
  $C|x, \pm 1\rangle = \left(|x, -1\rangle \pm |x, 1\rangle\right)/\sqrt{2}$
  $S|x, c\rangle = |x + c, c\rangle$
• The position can progress linearly in the number of steps.
• A classical walk has a position which is an integer, $x$, which jumps either to the left or the right at each step.
• The resulting distribution is a binomial distribution, or a normal distribution in the limit.
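The linear spreading can be seen by simulating the coined walk directly. In the coin basis $(-1, +1)$ the operator $C$ above is exactly the Hadamard coin, so this NumPy sketch applies $C$ then $S$ repeatedly and compares the spread with the classical $\sqrt{t}$:

```python
import numpy as np

# Coined walk on a line: C|x,±1> = (|x,-1> ± |x,1>)/sqrt(2), S|x,c> = |x+c,c>
steps = 100
pos = np.arange(-steps, steps + 1)
amp = np.zeros((len(pos), 2), dtype=complex)   # columns: coin -1, coin +1
amp[steps, 1] = 1.0                            # start at x = 0 with coin +1

Hcoin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # C in the (-1,+1) basis

for _ in range(steps):
    amp = amp @ Hcoin.T                 # coin operator at every position
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]        # coin -1 shifts left
    shifted[1:, 1] = amp[:-1, 1]        # coin +1 shifts right
    amp = shifted

p = (np.abs(amp) ** 2).sum(axis=1)
mean = (p * pos).sum()
sigma = np.sqrt((p * pos ** 2).sum() - mean ** 2)
print(sigma)   # grows linearly in steps; a classical walk gives sqrt(100) = 10
```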
Quantum walk on a graph

• The walk position is any node on the graph.
• An edge between $a$ and $a'$ is denoted $aa'$.
• The quantity $d(a)$ is the number of edges incident on vertex $a$.
• Describe the generator matrix $K$ by
  $K_{aa'} = \begin{cases} \gamma, & a \ne a',\ aa' \in G \\ 0, & a \ne a',\ aa' \notin G \\ -d(a)\gamma, & a = a' \end{cases}$
• The probability distribution for a continuous walk has the differential equation
  $\frac{dp_a(t)}{dt} = \sum_{a'} K_{aa'}\, p_{a'}(t)$
Quantum walk on a graph (Farhi, 1998)

• The classical walk obeys
  $\frac{dp_a(t)}{dt} = \sum_{a'} K_{aa'}\, p_{a'}(t)$
• Quantum mechanically we have
  $i\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$
  $i\frac{d}{dt}\langle a|\psi(t)\rangle = \sum_{a'} \langle a|H|a'\rangle \langle a'|\psi(t)\rangle$
• The natural quantum analogue is $\langle a|H|a'\rangle = K_{aa'}$.
• We take
  $\langle a|H|a'\rangle = \begin{cases} \gamma, & a \ne a',\ aa' \in G \\ 0, & \text{otherwise.} \end{cases}$
• Probability is conserved because $H$ is Hermitian.
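Both conservation properties can be checked on a small example. This NumPy sketch (a 4-cycle chosen here for illustration) builds $K$ and $H$ as defined above, and verifies that the columns of $K$ sum to zero (classical probability conservation) while $e^{-iHt}$ is unitary (quantum norm conservation):

```python
import numpy as np

# Continuous-time walk on a 4-cycle: generator K and quantum analogue H
gamma, n = 1.0, 4
adj = np.zeros((n, n))
for a in range(n):
    adj[a, (a + 1) % n] = adj[(a + 1) % n, a] = 1

deg = adj.sum(axis=0)
K = gamma * adj - gamma * np.diag(deg)   # K_{aa'} as defined on the slide
H = gamma * adj                          # <a|H|a'> = gamma on edges

# Classical: each column of K sums to zero, so total probability is conserved
print(np.allclose(K.sum(axis=0), 0))

# Quantum: H is Hermitian, so e^{-iHt} is unitary and the norm is conserved
w, V = np.linalg.eigh(H)
Ut = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.conj().T
print(np.allclose(Ut.conj().T @ Ut, np.eye(n)))
```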
Quantum walk on a graph (Childs, Farhi, Gutmann, 2002)

[Figure: two binary trees glued together at their leaves; the entrance is the root of one tree and the exit is the root of the other.]

• The goal is to traverse the graph from entrance to exit.
• Classically the random walk will take exponential time.
• For the quantum walk, define a superposition state over each column,
  $|\mathrm{col}\ j\rangle = \frac{1}{\sqrt{N_j}} \sum_{a \in \mathrm{column}\ j} |a\rangle, \qquad N_j = \begin{cases} 2^j, & 0 \le j \le n \\ 2^{2n+1-j}, & n+1 \le j \le 2n+1 \end{cases}$
• On these states the matrix elements of the Hamiltonian are
  $\langle \mathrm{col}\ j | H | \mathrm{col}\ j\pm 1\rangle = \sqrt{2}\,\gamma$
Quantum walk on a graph (Childs, Cleve, Deotto, Farhi, Gutmann, Spielman, 2003)

• Add random connections between the two trees.
• All vertices (except entrance and exit) have degree 3.
• Again using column states, the matrix elements of the Hamiltonian are
  $\langle \mathrm{col}\ j|H|\mathrm{col}\ j\pm 1\rangle = \begin{cases} \sqrt{2}\,\gamma, & j \ne n \\ 2\gamma, & j = n \end{cases}$
• This is a line with a defect.
• There are reflections off the defect, but the quantum walk still reaches the exit efficiently.
NAND tree quantum walk (Farhi, Goldstone, Gutmann, 2007)

• In a game tree I alternate making moves with an opponent.
• In this example, if I move first then I can always direct the ant to the sugar cube.
• What is the complexity of doing this in general? Do we need to query all the leaves?

[Figure: a game tree of alternating AND and OR gates with leaves $x_1, \ldots, x_8$.]
NAND tree quantum walk (Farhi, Goldstone, Gutmann, 2007)

[Figure: a tree of alternating OR and AND gates with leaves $x_1, \ldots, x_4$ is rewritten, by inserting NOT gates, into an equivalent tree built entirely from NAND gates.]
NAND tree quantum walk (Farhi, Goldstone, Gutmann, 2007)

• The Hamiltonian is a sum of an oracle Hamiltonian, representing the connections, and a fixed driving Hamiltonian, which is the remainder of the tree:
  $H = H_O + H_D$
• Prepare a travelling wave packet on the left.
• If the answer to the NAND tree problem is 1, then after a fixed time the wave packet will be found on the right.
• The reflection depends on the solution of the NAND tree problem.
Simulating quantum walks

• A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns all the nodes that are connected.
• The quantum oracle is queried with a node number $x$ and a neighbour number $j$.
• It returns a result via the quantum operation
  $U_O |x, j\rangle |0\rangle = |x, j\rangle |y\rangle$
• Here $y$ is the $j$'th neighbour of $x$.
Decomposing the Hamiltonian (Aharonov, Ta-Shma, 2003)

• In the matrix picture, we have a sparse matrix. The rows and columns correspond to node numbers.
• The ones indicate connections between nodes.
• The oracle gives us the position of the $j$'th nonzero element in column $x$.

  $H = \begin{pmatrix}
  0 & 0 & 1 & 0 & 0 & 1 & \cdots & 0 \\
  0 & 1 & 0 & 0 & 0 & 1 & \cdots & 0 \\
  1 & 0 & 0 & 0 & 0 & 0 & \cdots & 1 \\
  0 & 0 & 0 & 1 & 1 & 0 & \cdots & 0 \\
  0 & 0 & 0 & 1 & 1 & 0 & \cdots & 0 \\
  1 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\
  \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
  0 & 0 & 1 & 0 & 0 & 0 & \cdots & 1
  \end{pmatrix}$
Decomposing the Hamiltonian (Aharonov, Ta-Shma, 2003)

• In the matrix picture, we have a sparse matrix (the same 0-1 matrix as above); the rows and columns correspond to node numbers, and the ones indicate connections between nodes.
• The oracle gives us the position of the $j$'th nonzero element in column $x$.
• We want to be able to separate the Hamiltonian into 1-sparse parts.
• This is equivalent to a graph colouring: the graph edges are coloured such that each node has unique colours.
Graph colouring (Berry, Ahokas, Cleve, Sanders, 2007)

• How do we do this colouring?
• First guess: for each node, assign edges sequentially according to their numbering. This does not work because the edge between nodes $x$ and $y$ may be edge 1 (for example) of $x$, but edge 2 of $y$.
• Second guess: for the edge between $x$ and $y$, colour it according to the pair of numbers $(j_x, j_y)$, where it is edge $j_x$ of node $x$ and edge $j_y$ of node $y$.
• We decide the order such that $x < y$.
• It is still possible to have ambiguity: say we have $x < y < z$.
Graph colouring (Berry, Ahokas, Cleve, Sanders, 2007)

• First guess: for each node, assign edges sequentially according to their numbering. This does not work because the edge between nodes $x$ and $y$ may be edge 1 (for example) of $x$, but edge 2 of $y$.
• Second guess: for the edge between $x$ and $y$, colour it according to the pair of numbers $(j_x, j_y)$, where it is edge $j_x$ of node $x$ and edge $j_y$ of node $y$. We decide the order such that $x < y$.
• It is still possible to have ambiguity: say we have $x < y < z$.
• Use a string of nodes with equal edge colours, and compress.

[Figure: nodes $x < y < z$ along a line, where the edge $xy$ and the edge $yz$ both receive the label $(1,2)$, illustrating the ambiguity.]
General Hamiltonian oracles (Aharonov, Ta-Shma, 2003)

• More generally, we can perform a colouring on a graph with matrix elements of arbitrary (Hermitian) values.
• Then we also require an oracle to give us the values of the matrix elements:
  $U_O |x, j\rangle |0\rangle = |x, j\rangle |y\rangle$
  $U_H |x, y\rangle |0\rangle = |x, y\rangle |H_{x,y}\rangle$

[Matrix: a sparse Hermitian matrix whose nonzero entries are arbitrary complex values, e.g. $\sqrt{2}$, $-\sqrt{2}i$, $1/2$, $e^{\pm i\pi/7}$, $\sqrt{3}-i$, $1/10$.]
Simulating 1-sparse case (Aharonov, Ta-Shma, 2003)

$H = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & \sqrt{2}i & \cdots & 0 \\
0 & 3 & 0 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots & -\sqrt{3}+i \\
0 & 0 & 0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & 2 & 0 & \cdots & 0 \\
-\sqrt{2}i & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & -\sqrt{3}-i & 0 & 0 & 0 & \cdots & 0
\end{pmatrix}$
• Assume we have a 1-sparse matrix.
• How can we simulate evolution under this Hamiltonian?
• Two cases:
  1. If the element is on the diagonal, then we have a 1D subspace.
  2. If the element is off the diagonal, then we need a 2D subspace.
Simulating 1-sparse case (Aharonov, Ta-Shma, 2003)

• We are given a column number $x$. There are then 5 quantities that we want to calculate:
  1. $b_x$: a bit registering whether the element is on or off the diagonal; i.e. whether $x$ belongs to a 1D or 2D subspace.
  2. $min_x$: the minimum number out of the (1D or 2D) subspace to which $x$ belongs.
  3. $max_x$: the maximum number out of the subspace to which $x$ belongs.
  4. $A_x$: the entries of $H$ in the subspace to which $x$ belongs.
  5. $U_x$: the evolution under $H$ for time $t$ in the subspace.
• We have a unitary operation that maps
  $|x\rangle|0\rangle \to |x\rangle |b_x, min_x, max_x, A_x, U_x\rangle$
Simulating 1-sparse case (Aharonov, Ta-Shma, 2003)

• We have a unitary operation that maps
  $|x\rangle|0\rangle \to |x\rangle |b_x, min_x, max_x, A_x, U_x\rangle$
• We consider a superposition of the two states in the subspace,
  $|\psi\rangle = \mu\,|min_x\rangle + \nu\,|max_x\rangle$
• Then we obtain
  $|\psi\rangle|0\rangle \to |\psi\rangle|b_x, min_x, max_x, A_x, U_x\rangle$
• A second operation implements the controlled operation based on the stored approximation of the unitary operation $U_x$:
  $|\psi\rangle|U_x, min_x, max_x\rangle \to \left(U_x|\psi\rangle\right)|U_x, min_x, max_x\rangle$
• This gives us
  $\left(U_x|\psi\rangle\right)|b_x, min_x, max_x, A_x, U_x\rangle$
• Inverting the first operation then yields
  $U_x|\psi\rangle|0\rangle$
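The structure this procedure exploits is that for a 1-sparse $H$, the evolution $e^{-iHt}$ acts independently on each 1D or 2D subspace, so the small block unitary $U_x$ agrees with the full evolution restricted to $\{min_x, max_x\}$. A NumPy sketch (with illustrative entries chosen here) verifies this:

```python
import numpy as np

# A 1-sparse Hamiltonian: one 2D subspace {0, 3} and one 1D subspace {1}
N = 6
H = np.zeros((N, N), dtype=complex)
H[0, 3] = 2 - 1j
H[3, 0] = 2 + 1j
H[1, 1] = 1.5

t = 0.8
w, V = np.linalg.eigh(H)
U_full = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# The 2x2 block evolution U_x on span{|0>, |3>}
h = H[np.ix_([0, 3], [0, 3])]
wb, Vb = np.linalg.eigh(h)
U_x = Vb @ np.diag(np.exp(-1j * wb * t)) @ Vb.conj().T

# The full evolution restricted to the subspace equals the block evolution
print(np.allclose(U_full[np.ix_([0, 3], [0, 3])], U_x))
```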
Applications

• 2007: Discrete query NAND algorithm (Childs, Cleve, Jordan, Yeung)
• 2009: Solving linear systems (Harrow, Hassidim, Lloyd)
• 2009: Implementing sparse unitaries (Jordan, Wocjan)
• 2010: Solving linear differential equations (Berry)
• 2013: Algorithm for scattering cross section (Clader, Jacobs, Sprouse)
Implementing unitaries (Jordan, Wocjan, 2009)

• Construct a Hamiltonian from the unitary as
  $H = \begin{pmatrix} 0 & U \\ U^\dagger & 0 \end{pmatrix}$
• Now simulate evolution under this Hamiltonian. Since $H^2 = \mathbb{I}$,
  $e^{-iHt} = \mathbb{I}\cos t - iH \sin t$
• Simulating for time $t = \pi/2$ gives
  $e^{-iH\pi/2}|1\rangle|\psi\rangle = -iH|1\rangle|\psi\rangle = -i|0\rangle U|\psi\rangle$
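This identity is easy to verify numerically. A NumPy sketch with a random unitary $U$ and state $|\psi\rangle$ (both chosen here for illustration), taking the $|1\rangle$ block to be the lower half:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random unitary via QR decomposition
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

# H = [[0, U], [U^dag, 0]]; then H^2 = I, so e^{-iHt} = I cos t - i H sin t
Z = np.zeros((3, 3))
H = np.block([[Z, U], [U.conj().T, Z]])
print(np.allclose(H @ H, np.eye(6)))

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
state = np.concatenate([np.zeros(3), psi])   # |1>|psi> in the lower block

w, V = np.linalg.eigh(H)
Ut = V @ np.diag(np.exp(-1j * w * np.pi / 2)) @ V.conj().T
out = Ut @ state

# e^{-iH pi/2} |1>|psi> = -i |0> U|psi>
print(np.allclose(out, np.concatenate([-1j * U @ psi, np.zeros(3)])))
```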
Quantum simulation via walks

• Three ingredients:
  1. A Szegedy quantum walk
  2. Coherent phase estimation
  3. Controlled state preparation
• The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian.
• By using phase estimation, we can estimate each eigenvalue, then implement the phase that is actually needed.
Szegedy Quantum Walk (Szegedy, 2004)

• The walk uses two reflections,
  $2CC^\dagger - \mathbb{I} \quad\text{and}\quad 2RR^\dagger - \mathbb{I}$
• The first is controlled by the first register and acts on the second register.
• Given some matrix $c[i,j]$, the operator $C$ is defined by
  $|c_i\rangle = \sum_{j=1}^{N} c[i,j]\,|j\rangle, \qquad C = \sum_{i=1}^{N} |i\rangle\langle i| \otimes |c_i\rangle$
Szegedy Quantum Walk (Szegedy, 2004)

• The diffusion operator $2RR^\dagger - \mathbb{I}$ is controlled by the second register and acts on the first. Use a similar definition with matrix $r[i,j]$.
• Both are controlled reflections:
  $2CC^\dagger - \mathbb{I} = \sum_{i=1}^{N} |i\rangle\langle i| \otimes \left(2|c_i\rangle\langle c_i| - \mathbb{I}\right)$
  $2RR^\dagger - \mathbb{I} = \sum_{i=1}^{N} \left(2|r_i\rangle\langle r_i| - \mathbb{I}\right) \otimes |i\rangle\langle i|$
• The eigenvalues and eigenvectors of the step of the quantum walk
  $(2CC^\dagger - \mathbb{I})(2RR^\dagger - \mathbb{I})$
  are related to those of a matrix formed from $c[i,j]$ and $r[i,j]$.
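The operator $C$ is an isometry whenever each $|c_i\rangle$ is normalised, which makes $2CC^\dagger - \mathbb{I}$ a reflection. A NumPy sketch (with random normalised amplitudes $c[i,j]$ chosen here) builds $C$ explicitly and checks both properties:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 3
c = rng.random((N, N))
c /= np.linalg.norm(c, axis=1, keepdims=True)   # each |c_i> normalised

# C = sum_i |i><i| (x) |c_i> : an isometry from N to N*N dimensions
C = np.zeros((N * N, N))
for i in range(N):
    e = np.zeros(N)
    e[i] = 1
    C[:, i] = np.kron(e, c[i])

print(np.allclose(C.conj().T @ C, np.eye(N)))   # isometry: C^dag C = I

R = 2 * C @ C.conj().T - np.eye(N * N)
print(np.allclose(R @ R, np.eye(N * N)))        # reflection: R^2 = I
```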
Szegedy walk for simulation (Berry, Childs, 2012)

• Use a symmetric system, with
  $c[i,j] = r[i,j] = H^*_{ij}$
• Then the eigenvalues and eigenvectors are related to those of the Hamiltonian.
• In reality we need to modify to a "lazy" quantum walk, with
  $|c_i\rangle = \sqrt{\frac{\delta}{\|H\|_1}} \sum_{j=1}^{N} \sqrt{H^*_{ij}}\,|j\rangle + \sqrt{1 - \frac{\sigma_i \delta}{\|H\|_1}}\,|N+1\rangle, \qquad \sigma_i := \sum_{j=1}^{N} |H_{ij}|$
• Grover preparation gives
  $|\psi_b\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |k\rangle \left( \psi_k|0\rangle + \sqrt{1 - |\psi_k|^2}\,|1\rangle \right)$
Szegedy walk for simulation (Berry, Childs, 2012)

• Three-step process:
  1. Start with the state in one of the subsystems, and perform controlled state preparation.
  2. Perform steps of the quantum walk to approximate Hamiltonian evolution.
  3. Invert the controlled state preparation, so the final state is in one of the subsystems.
• Step 2 can just be performed with small $\delta$ for the lazy quantum walk, or can use phase estimation.
• The Hamiltonian has eigenvalues $\mu$, so evolution under the Hamiltonian has eigenvalues $e^{-i\mu T}$.
• $V$ is the step of a quantum walk, and has eigenvalues
  $e^{i\lambda} = \pm e^{\pm i \arcsin(\mu\delta)}$
• The complexity is the maximum of
  $\|H\| T \quad\text{and}\quad \frac{d\,\|H\|_{\max} T}{\sqrt{\varepsilon}}$