Entropy in the Quantum World
Panagiotis Aleiferis
EECS 598, Fall 2001
Outline
• Entropy in the classical world
• Theoretical background
  – Density matrix
  – Properties of the density matrix
  – The reduced density matrix
• Shannon’s entropy
• Entropy in the quantum world
  – Definition and basic properties
  – Some useful theorems
• Applications
  – Entropy as a measure of entanglement
• References
Entropy in the classical world

[Cartoon: Murphy’s Laws]
Why does heat always flow from warm to cold?

1st law of thermodynamics:
ΔQ = ΔW + ΔU

2nd law of thermodynamics:
“There is some degradation of the total energy U in the system, some non-useful heat, in any thermodynamic process.”
Rudolf Clausius (1822 - 1888)

The more disordered the energy, the less useful it can be!

“When energy is degraded, the atoms become more disordered, and the entropy increases!”
S = k log W
“At equilibrium, the system will be in its most probable state and the entropy will be maximum.”
Ludwig Boltzmann (1844 - 1906)
All possible microstates of 4 coins:
  Four heads:             W = 1
  Three heads, one tail:  W = 4
  Two heads, two tails:   W = 6
  One head, three tails:  W = 4
  Four tails:             W = 1
Boltzmann statistics – 5 dipoles in an external field
E = 0 if the dipole is aligned with the field (↑), E = U if it is anti-aligned (↓).

With n↑ dipoles up and n↓ down, the degeneracy g and energy E(n↑, n↓) give relative probabilities P ∝ g·exp(−E/kT):
  g = 1,  E(5,0) = 0:   P(5,0) ∝ 1·exp(0/kT)
  g = 5,  E(4,1) = U:   P(4,1) ∝ 5·exp(−U/kT)
  g = 10, E(3,2) = 2U:  P(3,2) ∝ 10·exp(−2U/kT)
  g = 10, E(2,3) = 3U:  P(2,3) ∝ 10·exp(−3U/kT)
  g = 5,  E(1,4) = 4U:  P(1,4) ∝ 5·exp(−4U/kT)
  g = 1,  E(0,5) = 5U:  P(0,5) ∝ 1·exp(−5U/kT)
General relations of Boltzmann statistics
– For a system in equilibrium at temperature T:
  P_n = g_n exp(−E_n/kT) / Σ_i g_i exp(−E_i/kT)
– Statistical entropy:
  S = −k Σ_i P_i ln P_i
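As an illustration, the sketch below (ours, not from the slides) normalizes the 5-dipole level probabilities with these relations and evaluates the statistical entropy over all 32 microstates, arbitrarily assuming U/kT = 1:

```python
import numpy as np

g = np.array([1, 5, 10, 10, 5, 1])       # degeneracies g_n of the 6 levels
E_over_kT = np.arange(6.0)               # E_n / kT = 0..5, assuming U/kT = 1

Z = np.sum(g * np.exp(-E_over_kT))       # partition function
P_level = g * np.exp(-E_over_kT) / Z     # level probabilities P_n
p_micro = np.exp(-E_over_kT) / Z         # probability of one microstate in level n

S_over_k = -np.sum(g * p_micro * np.log(p_micro))  # S/k = -sum_i p_i ln p_i
print(P_level.round(4), S_over_k.round(4))
```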
Theoretical Background

The density matrix ρ
– In most cases we do NOT completely know the exact state of the system. We can only estimate the probabilities P_i that the system is in the states |ψ_i>.
– Our system is in an “ensemble” of pure states {P_i, |ψ_i>}.
Define, for |ψ_i> with amplitudes (a_1i, a_2i, …, a_ni)^T in some basis:
  ρ = Σ_i P_i |ψ_i><ψ_i|
so that the matrix elements are ρ_mn = Σ_i P_i a_mi a*_ni :

      | Σ_i P_i |a_1i|²      Σ_i P_i a_1i a*_2i   …   Σ_i P_i a_1i a*_ni |
  ρ = | Σ_i P_i a_2i a*_1i   Σ_i P_i |a_2i|²      …   Σ_i P_i a_2i a*_ni |
      |        ⋮                    ⋮             ⋱          ⋮           |
      | Σ_i P_i a_ni a*_1i   Σ_i P_i a_ni a*_2i   …   Σ_i P_i |a_ni|²    |

The diagonal elements sum to Σ_i P_i Σ_m |a_mi|² = Σ_i P_i = 1, hence tr(ρ) = 1.
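A minimal numerical sketch of this construction (the example ensemble is our own assumption, not from the slides):

```python
import numpy as np

# Ensemble {P_i, |psi_i>}: 50% |0>, 50% (|0> + |1>)/sqrt(2)
psi_a = np.array([1, 0], dtype=complex)
psi_b = np.array([1, 1], dtype=complex) / np.sqrt(2)
ensemble = [(0.5, psi_a), (0.5, psi_b)]

# rho = sum_i P_i |psi_i><psi_i|
rho = sum(P * np.outer(psi, psi.conj()) for P, psi in ensemble)
print(np.trace(rho).real)   # -> 1.0, as claimed
```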
Properties of the density matrix
– tr(ρ) = 1
– ρ is a positive operator
  (positive means <v|ρ|v> is real and non-negative for every |v>)
– if a unitary operator U is applied, the density matrix transforms as:
  ρ(t₂) = U ρ(t₁) U†
– ρ corresponds to a pure state if and only if: tr(ρ²) = 1
– ρ corresponds to a mixed state if and only if: tr(ρ²) < 1
– if we choose the energy eigenfunctions as our basis set, then H and ρ are both diagonal, i.e.
  Ĥ_mn = E_n δ_mn ,  ρ_mn = ρ_n δ_mn
– in any other representation ρ may or may not be diagonal, but it is always Hermitian, i.e.
  ρ_mn = ρ*_nm
  Detailed balance is essential so that equilibrium is maintained (i.e. probabilities do NOT explicitly depend on time).
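The tr(ρ²) purity test is easy to check numerically; a short sketch with assumed example states:

```python
import numpy as np

rho_pure = np.outer([1, 0], [1, 0]).astype(complex)   # |0><0|, a pure state
rho_mixed = np.eye(2, dtype=complex) / 2              # I/2, completely mixed

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    print(name, np.trace(rho @ rho).real)   # tr(rho^2): 1.0 vs 0.5 < 1
```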
The reduced density matrix
– What happens if we want to describe a subsystem of the composite system?
– Divide our system AB into parts A, B.
– Reduced density matrix for the subsystem A:
  ρ_A = tr_B(ρ_AB)
  where tr_B is the “partial trace over subsystem B”:
  tr_B(|a₁><a₂| ⊗ |b₁><b₂|) = |a₁><a₂| · tr(|b₁><b₂|)
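One way to implement the partial trace numerically (a sketch; the helper name is ours): reshape ρ_AB so each subsystem gets its own row and column index, then sum over the B indices.

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """rho_A = tr_B(rho_AB) for a (dA*dB) x (dA*dB) density matrix."""
    r = rho_AB.reshape(dA, dB, dA, dB)    # indices (a, b, a', b')
    return np.einsum('abcb->ac', r)       # sum over b = b'

# Example: Bell state (|00> + |11>)/sqrt(2); tracing out B leaves I/2.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(partial_trace_B(np.outer(bell, bell.conj()), 2, 2).real)
```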
Shannon’s entropy

Definition
– How much information do we gain, on average, when we learn the value of a random variable X? Or, equivalently: what is the uncertainty, on average, about X before we learn its value?
– If {p₁, p₂, …, p_n} is the probability distribution of the n possible values of X:
  H(X) = H(p₁, p₂, …, p_n) = −Σ_i p_i log₂ p_i   (bits)
– By definition: 0·log₂0 = 0
  (events with zero probability do not contribute to the entropy).
– The entropy H(X) depends only on the respective probabilities of the individual events X_i!
– Why is the entropy defined this way? It gives the minimal physical resources required to store information so that the information can be reconstructed at a later time: “Shannon’s noiseless coding theorem”.
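A direct transcription of the definition (a sketch; the function name is ours), with the 0·log₂0 = 0 convention handled by dropping zero-probability events:

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # 0 log2 0 = 0 by definition
    return -np.sum(p * np.log2(p))   # bits

print(shannon_entropy([0.5, 0.5]))   # fair coin -> 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # biased coin -> ~0.469 bits
```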
– Example of Shannon’s noiseless coding theorem
  Code 4 symbols {1, 2, 3, 4} with probabilities {1/2, 1/4, 1/8, 1/8}.
  Code without compression:
    1, 2, 3, 4 → 00, 01, 10, 11
  But what happens if we use this code instead?
    1, 2, 3, 4 → 0, 10, 110, 111
  Average string length for the second code:
    length = (1/2)·1 + (1/4)·2 + (1/8)·3 + (1/8)·3 = 7/4
  Note: H(1/2, 1/4, 1/8, 1/8) = (1/2)log₂2 + (1/4)log₂4 + (1/8)log₂8 + (1/8)log₂8 = 7/4 !!!
Joint and Conditional Entropy
– A pair (X, Y) of random variables.
– Joint entropy of X and Y:
  H(X, Y) = −Σ_{x,y} p(x, y) log₂ p(x, y)
– Entropy of X conditional on knowing Y:
  H(X | Y) = H(X, Y) − H(Y)

Mutual Information
– How much do X, Y have in common?
– Mutual information of X and Y:
  H(X : Y) = H(X) + H(Y) − H(X, Y)
  [Venn diagram: H(X) and H(Y) overlap in H(X : Y); the remainders are H(X | Y) and H(Y | X).]
– H(X) ≤ H(X, Y), with equality when Y = f(X).
– Subadditivity: H(X, Y) ≤ H(X) + H(Y), with equality when X, Y are independent variables.
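These quantities can all be computed from a joint distribution table; a sketch with an assumed example p(x, y):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.25, 0.25],   # rows: values of X
                 [0.00, 0.50]])  # columns: values of Y

H_XY = H(p_xy)                   # joint entropy H(X, Y)
H_X = H(p_xy.sum(axis=1))        # marginal entropy H(X)
H_Y = H(p_xy.sum(axis=0))        # marginal entropy H(Y)

print(H_XY - H_Y)                # conditional entropy H(X | Y)
print(H_X + H_Y - H_XY)          # mutual information H(X : Y)
```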
Entropy in the quantum world

Von Neumann’s entropy
– Probability distributions replaced by the density matrix ρ. Von Neumann’s definition:
  S(ρ) = −tr(ρ log₂ ρ)
– If λ_i are the eigenvalues of ρ, use the equivalent definition:
  S(ρ) = −Σ_i λ_i log₂ λ_i
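In practice one diagonalizes ρ and applies the eigenvalue form; a minimal sketch (the function name is ours):

```python
import numpy as np

def von_neumann_entropy(rho):
    lam = np.linalg.eigvalsh(rho)   # eigenvalues of the Hermitian matrix rho
    lam = lam[lam > 1e-12]          # drop zeros: 0 log2 0 = 0
    return -np.sum(lam * np.log2(lam))

print(von_neumann_entropy(np.eye(2) / 2))   # completely mixed qubit -> 1.0 = log2(2)
```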
Basic properties of Von Neumann’s entropy
– S(ρ) ≥ 0, with equality if and only if ρ is a pure state.
– In a d-dimensional Hilbert space: S(ρ) ≤ log₂ d, with equality if and only if ρ is the completely mixed state, i.e.
  ρ = I/d = diag(1/d, 1/d, …, 1/d)
– If the system AB is in a pure state, then S(A) = S(B).
– Triangle inequality and subadditivity:
  |S(A) − S(B)| ≤ S(A, B) ≤ S(A) + S(B)
  with S(A) = −tr(ρ_A log₂ ρ_A), ρ_A = tr_B(ρ_AB)
       S(B) = −tr(ρ_B log₂ ρ_B), ρ_B = tr_A(ρ_AB)
       S(A, B) = −tr(ρ_AB log₂ ρ_AB)
  Both of these inequalities also hold for Shannon’s entropy H.
– Strong subadditivity:
  S(A) + S(B) ≤ S(A, C) + S(B, C)
  S(A, B, C) + S(B) ≤ S(A, B) + S(B, C)
  The first inequality also holds for Shannon’s entropy H, since:
  H(A) ≤ H(A, C) ,  H(B) ≤ H(B, C)
  BUT for Von Neumann’s entropy it is possible that S(A) > S(A, C) or S(B) > S(B, C). However, nature somehow “conspires” so that these two inequalities are never true simultaneously!
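A numerical spot-check of the pure-state property (our own construction, not from the slides): for a random pure state of AB, the reduced states satisfy S(A) = S(B), even when the subsystem dimensions differ.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
M /= np.linalg.norm(M)                        # amplitudes M[a, b] of a pure |AB>

rho_A = M @ M.conj().T                        # tr_B(rho_AB), a 2x2 matrix
rho_B = np.einsum('ab,ac->bc', M, M.conj())   # tr_A(rho_AB), a 3x3 matrix

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log2(lam))

print(S(rho_A), S(rho_B))   # equal, as claimed for pure states of AB
```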
Applications

Entropy as a measure of entanglement
– Entropy is a measure of the uncertainty about a quantum system before we make a measurement of its state.
– For a d-dimensional Hilbert space:
  0 ≤ S(ρ) ≤ log₂ d
  (S = 0: pure state; S = log₂ d: completely mixed state)
– Example: consider two 4-qubit systems with initial states:
  |ψ₁> = (|0000> + |1111>)/√2
  |ψ₂> = (|0011> + |0101> + |0110> + |1001> + |1010> + |1100>)/√6
  Which one is more entangled?
– Partial measurement randomizes the initially pure states.
– The entropy of the resulting mixed states measures the amount of this randomization!
– The larger the entropy, the more randomized the state is after the measurement, and the more entangled the initial state was!
– We have to go through evaluating the density matrix of the randomized states:
  ρ = Σ_i P_i |ψ_i><ψ_i|
– System 1: |ψ₁> = (|0000> + |1111>)/√2
  ρ = (1/2)|0000><0000| + (1/2)|0000><1111| + (1/2)|1111><0000| + (1/2)|1111><1111|
  → S(ρ) = 0   (pure state)

  Trace over (any) 1 qubit:
  ρ₃ = (1/2)|000><000| + (1/2)|111><111|  →  S(ρ₃) = 1

  Trace over (any) 2 qubits:
        | 1/2  0   0   0  |
  ρ₂ =  |  0   0   0   0  |     λ₁,₂ = 0 , λ₃,₄ = 1/2  →  S(ρ₂) = 1
        |  0   0   0   0  |
        |  0   0   0  1/2 |

  Trace over (any) 3 qubits:
  ρ₁ = | 1/2   0  |     λ₁,₂ = 1/2  →  S(ρ₁) = 1
       |  0   1/2 |

  Summary:
  1. initially:               S = 0
  2. measure (any) 1 qubit:   S = 1
  3. measure (any) 2 qubits:  S = 1
  4. measure (any) 3 qubits:  S = 1
– System 2: |ψ₂> = (|0011> + |0101> + |0110> + |1001> + |1010> + |1100>)/√6
  ρ = [(|0011> + |0101> + |0110> + |1001> + |1010> + |1100>)/√6] ×
      [(<0011| + <0101| + <0110| + <1001| + <1010| + <1100|)/√6]
  → S(ρ) = 0   (pure state)

  Trace over (any) 1 qubit:
  ρ₃ = (1/2)[(|011> + |101> + |110>)/√3][(<011| + <101| + <110|)/√3]
     + (1/2)[(|001> + |010> + |100>)/√3][(<001| + <010| + <100|)/√3]
  → after diagonalization: S(ρ₃) = 1

  Trace over (any) 2 qubits:
        | 1/6   0    0    0  |
  ρ₂ =  |  0   1/3  1/3   0  |     λ₁ = 0 , λ₂,₃ = 1/6 , λ₄ = 2/3
        |  0   1/3  1/3   0  |
        |  0    0    0   1/6 |
  S(ρ₂) = −(1/6)log₂(1/6) − (1/6)log₂(1/6) − (2/3)log₂(2/3) ≈ 1.252

  Trace over (any) 3 qubits:
  ρ₁ = | 1/2   0  |     λ₁,₂ = 1/2  →  S(ρ₁) = 1
       |  0   1/2 |

  Summary:
  1. initially:               S = 0
  2. measure (any) 1 qubit:   S = 1
  3. measure (any) 2 qubits:  S ≈ 1.252
  4. measure (any) 3 qubits:  S = 1

Therefore, |ψ₂> is more entangled than |ψ₁>.
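The whole example can be verified mechanically; a sketch (helper names are ours) that traces out the last k qubits of each state and evaluates the entropy:

```python
import numpy as np

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                  # 0 log2 0 = 0
    return -np.sum(lam * np.log2(lam))

def entropy_after_tracing(psi, k):
    """Entropy of a 4-qubit pure state after tracing out the last k qubits."""
    M = psi.reshape(2 ** (4 - k), 2 ** k)   # kept indices x traced indices
    return S(M @ M.conj().T)                # reduced density matrix

psi1 = np.zeros(16); psi1[[0b0000, 0b1111]] = 1 / np.sqrt(2)
psi2 = np.zeros(16)
psi2[[0b0011, 0b0101, 0b0110, 0b1001, 0b1010, 0b1100]] = 1 / np.sqrt(6)

for k in (1, 2, 3):
    print(k, round(entropy_after_tracing(psi1, k), 3),
             round(entropy_after_tracing(psi2, k), 3))
# psi1 -> 1.0, 1.0, 1.0 ; psi2 -> 1.0, 1.252, 1.0, so psi2 is more entangled
```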
“Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.”
- “States of Matter”, D. Goodstein
References
• “Quantum Computation and Quantum Information”, Nielsen & Chuang, Cambridge Univ. Press, 2000
• “Quantum Mechanics”, Eugen Merzbacher, Wiley, 1998
• Lecture notes by C. Monroe (PHYS 644, Univ. of Michigan): coursetools.ummu.umich.edu/2001/fall/physics/644/001.nsf
• Lecture notes by J. Preskill (PHYS 219, Caltech): www.theory.caltech.edu/people/preskill/ph229