University of Toronto
Department of Physics
Undergraduate Thesis Report

Strongly Repulsive Ultracold Potassium Atoms

Author: Boris Braverman
Supervisor: Prof. Joseph Thywissen

April 28, 2011
Abstract
Strongly interacting quantum systems present enormous challenges as well as opportunities in the realms of both experimental and theoretical physics. Recent interest in these systems has been spurred by condensed matter systems such as high-temperature superconductors, where the difficulty in understanding the observed phenomena reduces to an inability to understand the behaviour of a large number of strongly interacting degenerate fermions.
In our experiment, we implement the Stoner Hamiltonian in a cloud of ultracold 40K atoms, with repulsive interactions provided by a Feshbach resonance. Theoretical work has produced conflicting predictions regarding the behaviour of the system at large repulsive interactions between the fermions, with some work suggesting that such a system should undergo a ferromagnetic transition, and other work suggesting that ferromagnetism cannot be observed in the Stoner model.
Experimental results have been equally inconclusive. A recent experimental paper [1] concluded
that such a system exhibits a ferromagnetic transition, although the critical interaction strength
for the transition was significantly different from the results of theoretical work. A different experimental paper [2] concluded just the opposite, that ferromagnetism is “unlikely to be observed in
this system”, based on measurements of spin drag and spin diffusivity.
Our ultimate goal is then to resolve the confusion regarding this system, and in particular elucidate when, if ever, ferromagnetism can be observed. On the way to this goal, we will investigate the
behaviour of Fermi gases near Feshbach resonances, in particular the effect of molecular formation
on observed properties of the gas.
Acknowledgments
I would like to thank my lab mates, Alma Bardon and Nathan Cheng, for a fantastic and
fascinating year of learning about ultracold fermions. I would especially like to thank Alma for
sharing her wisdom about running an elaborate and delicate optical apparatus, and Nathan for
our late-night conversations about physics, mathematics, and philosophy. I would like to thank the
MP025 denizens, Dan Fine, Graham Edge, Dave McKay, and Dylan Jervis, for discussions, hilarity,
and putting up with my endless quests for equipment and distraction. Finally, I would like to thank
Joseph Thywissen for his guidance in the lab as well as in academic life in general.
It has been a pleasure working with you all.
Contents

Abstract
Table of contents
1 Introduction
  1.1 Ferromagnetism
2 Theoretical background
  2.1 Stoner instability
  2.2 Ultracold atom scattering
  2.3 Modeling the experiment
    2.3.1 Extracting the susceptibility as a function of interaction strength
    2.3.2 Spin oscillations in the absence of ferromagnetism
    2.3.3 Thoughts about Fermi gas oscillations with spin textures
3 Experimental possibilities
  3.1 MIT experiment
  3.2 Stringari proposal
  3.3 Spin textures
    3.3.1 Playing with springs
    3.3.2 Time of flight signatures of ferromagnetism
  3.4 Molecular reaction rates
  3.5 Other signatures of ferromagnetism
    3.5.1 Protected coherence
    3.5.2 Calorimetric signatures
    3.5.3 Spin imbalanced gases
    3.5.4 Local magnetization
4 Sub-projects
  4.1 Microwave evaporation
    4.1.1 Overview of the Anritsu MG37022A
    4.1.2 Programming the Anritsu MG37022A
    4.1.3 Sample code
    4.1.4 Microwave circuit
    4.1.5 Microwave calibration
  4.2 Image analysis
    4.2.1 Algorithms
References
A Data fitting with MATLAB
  A.1 Functions and tips
    A.1.1 lscov
    A.1.2 lsqnonlin
    A.1.3 fminsearch
    A.1.4 Tips and tricks
  A.2 Fermi cloud fitting
    A.2.1 Li2_2d_physical.m
    A.2.2 Li2_2d_fit.m
    A.2.3 Mathematica polylog pre-evaluation code
B Polylogarithms
C Fitting functions for classical gases
  C.1 Harmonic potential
  C.2 Quadrupole potential
  C.3 Linear potential
D Fitting functions for fermion gases
Index
List of Figures

2.1 Energy of a magnetized gas
2.2 Local density approximation profiles for an interacting Fermi gas
2.3 Equilibrium magnetization of a spin-mixture Fermi gas found using quantum Monte Carlo simulations
2.4 Key figures from [3]
2.5 Radial atom density profile in a spherical harmonic trap
2.6 Volume differential with respect to density in a spherical harmonic trap
2.7 Mean inverse susceptibility in a harmonic trap
2.8 Mean inverse susceptibility in a harmonic trap for a quasiferromagnet
4.1 Microwave circuit diagram
4.2 Calibration data for the microwave power diode
4.3 Calibration curves for the microwave power diode
4.4 Calibration curves for microwave amplifier gain
4.5 Microwave reflection and transmission properties of the chip
4.6 Raw time-of-flight images of 40K
4.7 Ratio time-of-flight image of 40K
4.8 Illustration of time of flight imaging
4.9 Sample Fermi cloud fit
4.10 Fermi fitting residuals histogram
4.11 Fugacity-temperature conversion
4.12 Artificial images of fermion clouds
4.13 Errors in fitting Fermi clouds
4.14 Results of fitting experimental Fermi clouds
B.1 Log-log plots of polylogarithm functions
D.1 Ideal Fermi clouds
List of Tables

4.1 List of Anritsu MG37022A commands
Chapter 1
Introduction
Strongly correlated quantum systems are incredibly difficult to model and understand, with the difficulty being largely due to the enormous dimension of the vector space in which the problem is expressed. While classical systems of N particles can be described by a 6N-dimensional phase space, and uncorrelated quantum systems only require a 3-dimensional continuous Hilbert space describing each particle (of dimension ℵ₁^3), the full description of the quantum problem that is necessitated by the strong interactions and correlations requires a 3N-dimensional continuous Hilbert space (of dimension ℵ₁^{3N}). In essence, while in a non-interacting quantum system one can solve Schrödinger's equation for each particle independently and then impose quantum statistics through (anti-)symmetrization of the wavefunction, this is no longer a viable possibility in a strongly correlated quantum system.
Many theoretical techniques have been developed and employed to solve quantum problems
involving strong interactions, including quantum Monte Carlo methods, second and higher order
perturbation theory, and Fermi liquid theory. The profusion of techniques and the uncertainty as to the domain of applicability of each technique are a testament to the difficulty of solving quantum problems with strongly interacting particles.
One of the most exciting features of cold atom physics is the ability to experimentally implement these strongly correlated systems: the tuning of an experimental parameter takes the system from an ideal, solvable limit into the depths of a regime where exact solutions are impossible and approximate solutions are often incorrect. This is the essence of quantum simulation. One can then extract information about the solutions to these problems, which can help elucidate which approximations (if any) are appropriate in simplifying a theoretical treatment of the problem, and which theoretical models (if any) provide a good description of the behaviour of the system.
A simple and commonly studied system is a Fermi gas in a harmonic trap, described by the Stoner Hamiltonian:

H = Σ_i p_i²/2m + Σ_i U(r_i) + Σ_{i,j} V(r_i, r_j) S_i · S_j,   (1.1)
where U (ri ) is the trapping potential and V (ri , rj ) is the interaction potential. If the fermions
are non-interacting (i.e. V = 0), then a simple exact solution for the properties of the gas exists for
any temperature of the gas. Quantum statistical mechanics, in the Thomas-Fermi approximation
(i.e. in the thermodynamic limit N ≫ 1, making different canonical ensembles equivalent), predicts
the distribution of the particles in both phase space and in real space, allowing the prediction of
any empirical properties of the gas. One can predict the density of the atoms at any position in
space both while they are trapped and after being released (non-adiabatically) from their trap, as
is done in Appendix D.
Adding interactions to the system makes it significantly more complicated. One can no longer
treat the particles independently, and as the interaction becomes stronger, simple perturbation
theory approaches fail. For spin-1/2 fermions, only particles of opposite spin can interact, and one
intuitively expects a ferromagnetic transition in the gas as the repulsive potential V between the
spin species becomes great enough.
Observation of ferromagnetic behaviour was the claim made in “Itinerant Ferromagnetism in
a Fermi Gas of Ultracold Atoms” [1] by Wolfgang Ketterle’s group. This publication spawned
much controversy regarding the possibility of creating and observing ferromagnetic behaviour in
a degenerate Fermi gas (DFG, i.e. T < TF where TF is the Fermi temperature of the gas) with
repulsive interactions. The parameters that [1] reported on were atom loss rate, kinetic energy, and
chemical potential, with observations qualitatively agreeing with the prediction of the dynamics of
a DFG undergoing a ferromagnetic transition within the Stoner model.
However, there was no direct observation of a magnetic phenomenon in the gas, and an important
question arose regarding the nature of the observed features. One particular point of contention
was the large disagreement between the experimental and theoretical predictions for interaction
strength at which the transition should occur. The transitions observed in [1] occurred at kF a ≈ 2.2 (kF a is a dimensionless interaction strength discussed in Chapter 2 on theoretical background), whereas calculations in [3] predict the transition to begin at kF a ≈ 0.85. The mean-field result for the transition is kF a = π/2 ≈ 1.57. Many theoretical papers [4, 5, 6, 7] provided other evidence conflicting with the conclusion made in [1].
The experiment we are trying to realize is based on the work by Zhang and Ho [5] and by Recati and Stringari [3], with the latter work partially based on earlier quantum Monte Carlo work
by Pilati et al.[8]. Considerable discussion of the theoretical aspects of the formation of magnetism
in a degenerate Fermi gas can also be found in Lindsay’s thesis [9, Ch. 4] which is an expanded
version of [10]. Earlier work on itinerant ferromagnetism in Fermi gases was also done by Duine
and MacDonald [11]. A key recent paper is [2], reporting on an experiment in Martin Zwierlein’s
group analyzing spin drag and spin diffusivity.
In [1], the authors provided compelling but indirect evidence for a ferromagnetic phase transition in their ultracold degenerate (T = 0.12 TF) Fermi gas. The clearest signatures of something interesting happening with the gas are the low-temperature behaviours of the gas loss rate (which
drops off at “high enough” interaction strengths, contrary to what one would normally expect from
a Feshbach resonance), and the corresponding minimum in the kinetic energy of the gas (without a
ferromagnetic transition, the kinetic energy of the atoms would continue to decrease as the trapped
cloud expanded due to the interatomic repulsion). Much of the evidence, both theoretical and
experimental, accumulated since that publication has contradicted this conclusion.
The goal of our experiment is both to repeat the observations made in [1] and to find a
“smoking gun”, that is, conclusive evidence either for or against the occurrence of a ferromagnetic
transition in the Stoner model.
1.1 Ferromagnetism
To put the experiment in perspective, we will briefly discuss some of the history of ferromagnetism
and its relation to quantum mechanics. Ferromagnetism was first discovered in iron ores thousands
of years ago, when Chinese scientists realized that small pieces of lodestone (a natural magnetic
ore) would spontaneously point in a north-south direction if allowed to rotate freely. A quantitative
explanation of the phenomenon would however have to await the arrival of quantum mechanics and
the discovery of spin. The prototypical Hamiltonian used for analyzing magnetism is the Heisenberg
Hamiltonian:

Ĥ = −Σ_{i≠j; a,b} T_ab S_{i,a} S_{j,b},   (1.2)

where T_ab is the coupling tensor and S_{i,a} is the a-th component of the i-th spin. Because of the Pauli exclusion principle, T_ab must be diagonal for fermions. Notice that the index i is a summation
over a presumed atomic lattice, where the spins are localized (hence this form of ferromagnetism
is non-itinerant). By assuming isotropic nearest-neighbour interactions, one arrives at the Ising
model:
Ĥ = −T Σ_i S_i · S_{n(i)},   (1.3)
where n(i) are the lattice positions neighbouring i. In one dimension, this problem was solved
by Ising, who discovered that there was no ferromagnetic transition in the model, from which he
concluded that the simple Hamiltonian (1.3) could not explain ferromagnetic phase transitions.
However, the higher-dimensional versions of the model do undergo a ferromagnetic phase transition, as shown in two dimensions by Onsager twenty years later. As already mentioned, in these systems one supposes that the spins are confined to definite lattice sites.
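Ising's no-transition result in one dimension can be checked directly from the exact transfer-matrix solution of the chain: the magnetization vanishes smoothly as the external field h → 0 at any T > 0. A minimal sketch in Python (J, T, and h in arbitrary units; the closed-form expression is the standard textbook transfer-matrix result):

```python
import math

def magnetization_1d_ising(T, h, J=1.0):
    """Exact magnetization per spin of the 1D Ising chain (transfer matrix):
    m = e^{bJ} sinh(bh) / sqrt(e^{2bJ} sinh^2(bh) + e^{-2bJ}), with b = 1/T."""
    b = 1.0 / T
    s = math.exp(b * J) * math.sinh(b * h)
    return s / math.sqrt(s * s + math.exp(-2 * b * J))

# The magnetization goes to zero with h: no spontaneous magnetization at T > 0.
for h in (1e-1, 1e-3, 1e-5):
    print(f"h = {h}: m = {magnetization_1d_ising(1.0, h):.6f}")
```

At any fixed T > 0 the printed magnetization shrinks roughly linearly with h, which is precisely the absence of a ferromagnetic transition that Ising found; in two or more dimensions this limit instead stays finite below the critical temperature.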
However, ferromagnetism is postulated to also exist in an itinerant form, in the conduction
electrons of a metallic ferromagnet, which form the prototypical example of a highly degenerate
Fermi gas (for most metals, the Fermi temperature TF ≈ 10^5 K). The ferromagnetism arises from the apparent repulsive potential between electrons of like spin due to the exchange interaction. The exchange interaction is a consequence of the fermionic nature of electrons: states with the electron spins aligned, e.g. |↑↑⟩, have spatially antisymmetric wavefunctions, which reduces the Coulomb interaction between the electrons. For example, this interaction causes the first excited state of helium to be a triplet state rather than the singlet state.
To investigate itinerant ferromagnetism, we will simulate the Stoner Hamiltonian (1.1) in a gas of neutral fermions, such as 40K atoms. Unfortunately, because the atoms are neutral, the exchange interaction does not create an interparticle potential. In order to simulate the degenerate electron gas, we need to create a repulsive interaction between the atoms. This repulsion is achieved using a Feshbach resonance, which creates an effective point-like repulsive potential between the 40K atoms. Note that this potential is short-ranged, similar to the screened Coulomb repulsion between electrons in a lattice. In both systems, the interaction energy scale is a couple of orders of magnitude smaller than the Fermi energy, making our model system similar enough. Simulating the Stoner Hamiltonian in a gas of ultracold 40K will help answer fundamental questions about both itinerant ferromagnetism and the properties of strongly interacting fermion systems.
Chapter 2
Theoretical background
Since the report of itinerant ferromagnetism in [1], much progress has been made in the theoretical analysis of the Stoner Hamiltonian. We begin our discussion with the classic
derivation of the Stoner instability in the local density approximation.
2.1 Stoner instability
An instability in the Stoner Hamiltonian leads to a ferromagnetic transition when the potential energy saved by aligning the electron spins begins to compensate for the kinetic energy cost incurred as the electrons are transferred to the same spin state. This is presented in the model in Figure 1 of [1].
Suppose the gas described by the Stoner Hamiltonian (1.1) consists of densities n↑ and n↓
respectively of the |↑i and |↓i states. Then, at T = 0, the kinetic energy density per unit volume of
the two spin components of the gas simply equals
E_k = (3/5) [n↑ E_F(n↑) + n↓ E_F(n↓)],   (2.1)

where E_F(n) = (ħ²/2m)(6π²n)^{2/3} is the Fermi energy of a uniform-density Fermi gas in a single spin state.
Suppose now that the particles interact via a point-like interaction with interaction potential

V(r_i, r_j) = (4πħ²a/m) δ(r_i − r_j).   (2.2)

In this case, the total potential energy of any |↑⟩ particle equals (4πħ²a/m) n↓, while for a |↓⟩ state, the potential energy is similarly equal to (4πħ²a/m) n↑. Thus, the total potential energy of the gas, per unit volume, is simply

E_p = (4πħ²a/m) n↑ n↓,   (2.3)

where we only count the contribution of one spin species to avoid double-counting the energy.
Adding (2.1) and (2.3) gives

E = E_k + E_p = (3/5)(ħ²/2m)(6π²)^{2/3} (n↑^{5/3} + n↓^{5/3}) + (4πħ²a/m) n↑ n↓.   (2.4)

We define n = n↑ + n↓ and η = (n↑ − n↓)/(n↑ + n↓), which are the total gas density and the magnetization of the gas respectively. Then we have n↑ = (1+η)n/2 and n↓ = (1−η)n/2.
Substituting into (2.4) gives

E = (3/5)(ħ²/2m)(6π²)^{2/3} (n^{5/3}/2^{5/3}) [(1+η)^{5/3} + (1−η)^{5/3}] + (πħ²a/m) n² (1+η)(1−η)
  = (3/10) E_F n [(1+η)^{5/3} + (1−η)^{5/3} + (20/(9π)) kF a (1+η)(1−η)],   (2.5)

where we have used the Fermi momentum for a gas consisting of an equal mixture of two spins, kF = (3π²n)^{1/3}, to simplify the expression. Note that the parameter kF a is dimensionless, and that
all the thermodynamic quantities (Fermi energies/momenta) refer to the total density of the gas and
not either of the spin components. Plotting this energy density as a function of η for different values
of kF a produces Figure 2.1. It is evident that the gas has three qualitatively different behaviours at different values of kF a. When kF a < π/2, E has a unique minimum for a completely mixed gas: η = 0. If π/2 < kF a < 3π/2^{7/3}, E has two minima at η = ±η₀, where η₀ is the spontaneous magnetization of the gas. Finally, if kF a > 3π/2^{7/3}, the ground states of the system are the two fully magnetized states.
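The three regimes can be checked numerically by minimizing the bracketed energy density in (2.5) over η. The sketch below (pure Python, brute-force grid minimization, energies in units of (3/10) E_F n) recovers η₀ = 0 below kF a = π/2, a partially magnetized minimum in the intermediate window, and full polarization above 3π/2^{7/3} ≈ 1.87:

```python
import math

def energy(eta, kFa):
    """Dimensionless energy density from (2.5), in units of (3/10) E_F n."""
    return ((1 + eta) ** (5 / 3) + (1 - eta) ** (5 / 3)
            + (20 / (9 * math.pi)) * kFa * (1 + eta) * (1 - eta))

def magnetization(kFa, steps=20000):
    """Ground-state magnetization eta0 >= 0 by brute-force grid minimization;
    by the eta -> -eta symmetry, -eta0 is an equivalent minimum."""
    etas = [i / steps for i in range(steps + 1)]
    return min(etas, key=lambda e: energy(e, kFa))

for kFa in (1.0, 1.5, 1.7, 1.9):
    print(f"kF*a = {kFa}: eta0 = {magnetization(kFa):.3f}")
```

The printed η₀ is zero for kF a below π/2 ≈ 1.57, becomes nonzero just above it, and saturates at 1 once kF a exceeds 3π/2^{7/3} ≈ 1.87.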
Note that this treatment of the Stoner model uses the results for a free Fermi gas, or at least
one subjected to a constant potential. The energy of each electron is dependent only on the overall
magnetization of the gas, discounting any local effects and any correlation properties. However, in
the case of a harmonic potential (as our trap is, to a first approximation), this approximation no
longer holds and we have to use the local density approximation (where we treat each small subvolume of the overall varying potential as an isolated flat-bottomed box). Making this approximation
is significant, since the local Fermi momentum kF of the atoms is proportional to n^{1/3}, and the
ferromagnetic transition should first occur at the middle of the trap and only then spread out to
the trap’s edges.
Taking the local density approximation to its logical conclusion and attempting to minimize the
energy functional given in (2.5) is what is done in [9, Ch. 4]. One then obtains the density profiles
shown in Figure 2.2. One then observes the intuitive transition of a partially magnetized gas into a
ferromagnetic state, namely one where the spin components of the cloud separate completely into
Figure 2.1: Plot of the energy of a magnetized gas found using (2.5). Legend indicates the value of kF a. Note that the plotted values include the critical values π/2 ≈ 1.57 and 3π/2^{7/3} ≈ 1.87 at which the gas in the ground state becomes partially and fully magnetized, respectively.
shells (or other domain-like structures) within the cloud. Nonetheless, as discussed in [10], these results are unphysical: the sharp gradients of the wavefunction at these domain walls would induce a large additional energy cost, a cost that the local density approximation, by construction, ignores.
A more thorough analysis of the ferromagnetic instability in the Stoner Hamiltonian has been
performed by [8] using a quantum Monte Carlo simulation to evaluate the energy of the gas as a
function of its magnetization while including local effects and correlations. Their results are shown
in Figure 2.3. It is clear that quantum Monte Carlo simulations predict a ferromagnetic instability
at a significantly (but not orders of magnitude) smaller interaction strength kF a ≈ 1 than does the
local density approximation.
2.2 Ultracold atom scattering
A Feshbach resonance occurs at a point where a perturbing Hamiltonian brings an eigenstate of
the unperturbed Hamiltonian for two free atoms into energy degeneracy (at least in the first order
of perturbation theory) with a weakly bound molecular state. Note that two
fermions in the same spin state cannot interact and hence Feshbach resonances in Fermi systems
occur between two particles of opposite spin (or rather, since we are using the F = 9/2 hyperfine state of 40K, the resonance is between the mF = 9/2 and mF = 7/2 states). The molecule and the
Figure 2.2: Local density approximation profiles for an interacting Fermi gas, from [9, Ch. 4]. Note that kF^0 refers to the Fermi momentum of the particles at the centre of the trap, where n is highest. The original caption reads: “LDA density profiles for various interaction strengths. The numerical solutions of n↑(r) (blue) and n↓(r) (green) are shown for increasing interactions, with equal populations in each spin state (N↑ = N↓ = N/2; η = 0). Dashed black lines indicate the kF^0 a = 0 noninteracting solution; gray dashed lines indicate the non-interacting solution for all particles in the same spin state (N↑ = N; N↓ = 0). Interaction parameters indicated in panels.”
Figure 2.3: Quantum Monte Carlo simulations performed in [8] predict the onset of partial ferromagnetism in the model we are considering at kF a ≈ 0.85 and full ferromagnetism at kF a ≈ 1,
depending on the precise nature of the interaction potential between the atoms. An interesting note
on the difficulty of performing simulations of quantum problems is the fact that these simulations
were performed using 33 particles.
atoms have different total magnetic moments (because of a change in angular momentum), and so
when placed in an external magnetic field, one can manipulate the relative energy of the two states
(bound and unbound) by tuning the external magnetic field.
For the gas of 40K atoms that we are using, we observe s-wave Feshbach resonances at B = 0.02016(6) T and B = 0.022421(5) T [12], with magnetic fields smaller than the resonance conditions
corresponding to an effective repulsive potential. In addition, there is a magnetic field between
these two resonances where the scattering length and hence interaction potential go to 0.
The magnetic field strengths corresponding to Feshbach resonances are generally determined
via experimental searches, with some theoretical work predicting new resonances within different
isotopes of an element where the resonances for one isotope are known.
The reason why the Feshbach resonance creates an effective interaction between the atoms
is subtle and can be best understood in the context of scattering theory. Basically though, the
existence of a molecular state resonant with the atomic state of two free atoms increases the matrix
element for the transition between different momentum eigenstates of the relative motion degree of
freedom of the two atoms.
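Near an isolated resonance, the net effect on low-energy scattering is usually summarized by the standard dispersive form a(B) = a_bg (1 − Δ/(B − B₀)), where a_bg is the background scattering length, B₀ the resonance position, and Δ the resonance width. A sketch with illustrative, invented parameter values (not measured 40K data) chosen only to mimic a resonance near our lower field value:

```python
def scattering_length(B, a_bg=1.0, B0=0.02016, delta=8e-4):
    """Standard single-resonance form a(B) = a_bg * (1 - delta / (B - B0)).
    Parameter values here are illustrative placeholders, not 40K data."""
    return a_bg * (1 - delta / (B - B0))

# Below the resonance (B < B0) the scattering length is enhanced and positive,
# giving the effective repulsion used in the experiment; a(B) crosses zero at
# B = B0 + delta, on the far side of the resonance.
for B in (0.0190, 0.0196, 0.0200, 0.0210):
    print(f"B = {B} T: a/a_bg = {scattering_length(B):+.3f}")
```

The sign structure matches the text: fields below the resonance give an enhanced positive (repulsive) scattering length, and the zero crossing sits beyond the resonance, consistent with the zero of the interaction lying between the two observed resonances.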
2.3 Modeling the experiment
In this section, we describe some simple modeling of the experiment proposed in [3]. The experiment
consists of taking a spin-mixed Fermi gas, slightly displacing the two spin components with respect
to each other, and measuring the resulting relative oscillation frequency of the two spin components.
In order to model the experiment (even to first order), we would have to combine the local density
quantum Monte Carlo results in [8, 3] with the spin texture results of [10]. We reproduce the key
figures of [3] in Figure 2.4.
Figure 2.4: Left: Figure 1 from [3], showing the inverse susceptibility of a Fermi gas in a harmonic
trap, calculated using Monte Carlo simulation. Right: Figure 2 from [3], showing the expected
dipole oscillation frequency for a Fermi gas with the spin components slightly separated.
As a first step, we qualitatively reproduce the results in [3], extending the local density approximation of Recati to the entire cloud even after the local onset of ferromagnetism. The
validity of this approximation is questionable, since the onset of ferromagnetism in the centre of the
trap would affect the density of atoms in other parts of the cloud, as well as significantly affecting
the spin transport properties of the cloud.
For simplicity, assume a spherical harmonic trap at temperature T = 0. Then we have V(r) = (1/2) m ω² r². Now, n ∝ (µ − V)^{3/2} (3/2 because the phase space available to free particles of quadratic dispersion scales that way with the energy; k_max ∝ √E_max; n ∝ k_max³). Note that at T = 0 the Thomas-Fermi approximation cannot hold, so this calculation is somewhat contrived, but if TF/N^{1/3} ≪ T ≪ TF, as is always the case when we are working with a degenerate Fermi gas, then the approximation is valid. For the harmonic potential, this becomes

n(r) = n(0) [(µ − V(r))/(µ − V(0))]^{3/2} = n(0) [1 − (r/R)²]^{3/2},   (2.6)

where R is the cloud radius. Inverting this relationship gives
r(n) = R [1 − (n/n0)^{2/3}]^{1/2}.   (2.7)
So the volume occupied by regions with density ≥ n is
V(n) = V0 [1 − (n/n0)^{2/3}]^{3/2},   (2.8)
which means that
dV/dn = −(V0/n0) [1 − (n/n0)^{2/3}]^{1/2} (n/n0)^{−1/3}.   (2.9)
This function is plotted in Figure 2.6.
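Equations (2.6)-(2.9) can be sanity-checked numerically: differentiating (2.8) by central finite differences should reproduce the shape [1 − (n/n0)^{2/3}]^{1/2} (n/n0)^{−1/3} of (2.9), with an overall prefactor −V0/n0 fixing the units. A minimal sketch with V0 = n0 = 1:

```python
V0, n0 = 1.0, 1.0  # trap volume and peak density scales (arbitrary units)

def V(n):
    """Volume occupied by regions with density >= n, Eq. (2.8)."""
    return V0 * (1 - (n / n0) ** (2 / 3)) ** (3 / 2)

def dVdn(n):
    """Analytic derivative of (2.8), cf. Eq. (2.9), including the -V0/n0 prefactor."""
    return -(V0 / n0) * (1 - (n / n0) ** (2 / 3)) ** 0.5 * (n / n0) ** (-1 / 3)

# Central finite-difference check at a few densities inside the cloud
for n in (0.2, 0.5, 0.8):
    h = 1e-6
    numeric = (V(n + h) - V(n - h)) / (2 * h)
    print(f"n = {n}: analytic {dVdn(n):+.6f}, numeric {numeric:+.6f}")
```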
Figure 2.5: Plot of radial atom density profile in a spherical harmonic trap found using (2.6).
In [3], Eq. 13 gives the oscillation frequency of the two spin components of the gas relative to each other, assuming that ferromagnetism has not yet set in:

ω_SD² = N / (m ∫ dr z² χ(n)).   (2.10)
Figure 2.6: Plot of volume differential with respect to density in a spherical harmonic trap found
using (2.9).
Figure 2.7: Plot of mean inverse susceptibility in a harmonic trap found using (2.11). The vertical
black bar indicates the value of a for which the centre of the trap begins exhibiting ferromagnetism.
This expression drops to 0 as soon as ferromagnetism sets in anywhere, because the model
used to derive this equation assumed that both spin components oscillate as a cohesive whole, and the
appearance of a ferromagnetic centre destroys this symmetry of the oscillation.
A crude functional form for the inverse susceptibility as a function of x = kF a is just 1/χ(x) =
2 − x − x2 − x3 for x < 0.81 and 1/χ(x) = 0 for greater x. This functional model incorporates
various aspects of the real function presented by Recati: it’s a decreasing function of at least 3rd
order, but its derivative does not diverge at the ferromagnetic transition point.
Brashly disregarding the limitations of our model, we find the local density approximation value
for the inverse susceptibility of the gas,
∫ dr / χ(n) = ∫ dn (dV/dn) / χ(n),   (2.11)
producing Figure 2.7. Note that the shape of the figure above is remarkably similar to [3, Fig. 2], and that nothing overly exciting happens when the centre of the trap becomes ferromagnetic: the curve of mean inverse susceptibility (and thus plausibly the oscillation frequency) merely has an inflection point.
Therefore, there is actually no sharp transition in the oscillation frequency as a function of
interaction strength in the local density approximation. Eventually, the frequency will decay to 0,
but because the density at the edges is small, they will never become fully magnetized and hence
will continue oscillating.
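The smoothness of the trap-averaged curve is easy to reproduce with the crude 1/χ model above. The sketch below (pure Python, V0 = n0 = 1) weights 1/χ(x) by |dV/dn| from (2.9), with the local x = kF a ∝ n^{1/3} parametrized by its value kF0 a at the trap centre; the average decreases smoothly through the point kF0 a ≈ 0.81 where the centre first turns ferromagnetic:

```python
def inv_chi(x):
    """Crude model from the text: 1/chi(x) = 2 - x - x^2 - x^3, zero past x = 0.81."""
    return 2 - x - x ** 2 - x ** 3 if x < 0.81 else 0.0

def dVdn_abs(n):
    """|dV/dn| for a spherical harmonic trap (V0 = n0 = 1), from (2.9)."""
    return (1 - n ** (2 / 3)) ** 0.5 * n ** (-1 / 3)

def mean_inv_chi(kF0_a, steps=100_000):
    """Trap-averaged 1/chi via midpoint rule over density, as in (2.11);
    kF0_a is the dimensionless interaction strength at the trap centre."""
    total = 0.0
    for i in range(steps):
        n = (i + 0.5) / steps
        total += dVdn_abs(n) * inv_chi(kF0_a * n ** (1 / 3))
    return total / steps

for kF0_a in (0.0, 0.4, 0.81, 1.2):
    print(f"kF0*a = {kF0_a}: <1/chi> = {mean_inv_chi(kF0_a):.4f}")
```

No kink appears at kF0 a = 0.81: the edges of the cloud, where n and hence the local kF a are small, keep contributing a finite 1/χ, exactly as argued above.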
2.3.1 Extracting the susceptibility as a function of interaction strength
As we can see, the mean inverse susceptibility as a function of the interaction parameter a is not
directly related to the susceptibility of the gas, because of averaging effects over the volume of the
gas. This raises an interesting deconvolution-type issue in attempting to reconstruct the actual
dependence of susceptibility on the interaction parameter. In (2.10), we should know the number
of atoms located at each density, i.e. (dV /dn), from which we should be able to find the form of
χ(n) that is consistent with the observed distribution. Assuming spherical symmetry, we have

$$\frac{N}{m\omega_{SD}^2} = \int d\mathbf{r}\, z^2\, \chi(n(\mathbf{r}), a)\, n(\mathbf{r}) = \int \chi(n(\mathbf{r}), a)\, n(\mathbf{r})\, r^4 \cos^2\theta \sin\theta\, d\phi\, d\theta\, dr = \frac{4\pi}{3}\int \chi(n^{1/3} a)\, n(r)\, r^4\, dr \qquad (2.12)$$
I doubt there is an easy way to extract χ from such an integral. The most likely approach to
succeed here will be to assume 1/χ(x) = (1/χ₀)(1 − Σᵢ cᵢ xⁱ), i.e. a polynomial, and then attempt
to extract a few coefficients cᵢ. It seems that the fit to the Monte Carlo data in [3] is a fairly
low-order polynomial, and so this polynomial model is not catastrophically bad. In addition, Recati
gives the analytic expression for the first and second-order terms of χ(x), so we could compare our
measurements to these values.
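As a sanity check on this idea, one can generate synthetic spin-dipole data from known coefficients and see whether a least-squares fit to the integral in (2.12) recovers a consistent model. The sketch below is illustrative only: the density profile, coefficient values and noise level are all invented, and for a pure-NumPy linear fit we expand χ itself as a polynomial (rather than 1/χ), which makes the integral linear in the unknown coefficients.

```python
import numpy as np

# assumed zero-temperature density profile in a spherical harmonic trap
r = np.linspace(0, 1, 2000, endpoint=False)
n = (1 - r**2)**1.5

# With chi(x) = d0 + d1 x + d2 x^2 + d3 x^3 and x = n^(1/3) a, the integral in
# (2.12) becomes linear in the d_i, with one density moment per power:
#   I(a) = sum_i d_i a^i M_i,   M_i = sum over r of n^(1+i/3) r^4
M = np.array([(n**(1 + i/3.0) * r**4).sum() for i in range(4)])

true = np.array([1.0, 0.4, 0.2, 0.1])          # made-up "true" coefficients
avals = np.linspace(0.05, 0.6, 15)             # interaction strengths sampled
rng = np.random.default_rng(0)
I = (avals[:, None]**np.arange(4) * M) @ true
I *= 1 + 0.01*rng.normal(size=I.size)          # 1% "measurement" noise

A = avals[:, None]**np.arange(4) * M           # design matrix
fit, *_ = np.linalg.lstsq(A, I, rcond=None)
print(fit)
```

In practice the xⁱ terms are strongly correlated over a narrow range of a, so the individual coefficients may be poorly constrained even when the fitted curve reproduces the data well.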
2.3.2 Spin oscillations in the absence of ferromagnetism
To truly prove that we have observed ferromagnetic behaviour, we need to show that there is no
plausible alternative explanation for the observed data. However, with the method proposed in [3],
this may not be so easy. For example, consider Figure 2.8. As can be seen, the difference between
the true ferromagnet and the quasiferromagnet is essentially negligible in the non-ferromagnetic
region (as is to be expected), and the behaviours diverge only slowly even at larger interaction
parameters.
Thus, our ability to show conclusively that the gas becomes ferromagnetic, even given the mean
susceptibility measurements, may be very limited.
In [2], the authors give the results of their measurements of the spin susceptibility for a gas
at unitarity (kF a → ∞). We can test their measurements by extending the oscillation frequency
analysis in [3] to the apparently non-ferromagnetic gas case. In particular, if the spin susceptibility
measurements in [2] are applicable in our system (i.e. in the degenerate limit), then we should observe
a saturation of the oscillation frequency of the cloud at kF a ≈ 1, a frequency that significantly exceeds
the ω ≈ ω0 shown in [3, Fig. 2]. Depending on the exact nature of the quantum limit to the spin
susceptibility, the oscillation frequency curve shown in Figure 2.4 will be significantly different from
the experimentally observed values. Regardless of what our experimental results show, we can only
confirm one of these two results (unless the two reported results actually refer to different physical
situations, which actually seems quite likely).
2.3.3 Thoughts about Fermi gas oscillations with spin textures
In their paper, Recati and Stringari discuss magnetization susceptibility as a function of the atom
density n and then use a local density approximation to calculate the oscillation frequency for
interaction strengths weak enough for ferromagnetism not to set in. In light of Lindsay’s calculations,
it is evident that the behaviour of the Fermi gas above the transition interaction strength will be
more complex than the simple extension of the low-field oscillation frequency results in [3, Fig. 2].
Figure 2.8: Plot of mean inverse susceptibility in a harmonic trap found using (2.11). The quasiferro
(top) line shows the mean susceptibility for a material that has its local susceptibility saturate at
10 times its non-interacting value.
My intuition says that once (if) ferromagnetism sets into the cloud, then we will see an increase
in the damping of oscillations that should be roughly proportional to the average degree of local
magnetization in the gas (on top of an increase in damping with increased interaction). The oscillation frequency will not fall by much, but because of the extra damping, it will become less
well-defined with greater values of the interaction parameter. The spatial inhomogeneity will cause
some very interesting patterns in the time-of-flight images, as the atoms slosh around in the trap
and try to return to equilibrium.
To further investigate this experiment, we need to combine the ideas of spin textures with
the ideas from Monte Carlo simulations to find a better estimate for the high-interaction ground
state of the trapped atoms. Additional considerations come from local correlation behaviour.
We then need to figure out whether the atoms will do anything funky when we shake them, and
then we can predict the atoms' behaviour (and try to check it experimentally).
Chapter 3
Experimental possibilities
There are several experimental investigations that we would like to perform using our system of
ultracold potassium atoms to test both the results of previous experiments and the predictions of
theoretical models.
The priority list for these experimental possibilities, at the present, seems to be as follows:
1. Molecular loss processes, based on [5] and [7].
2. Spin susceptibility measurements, based on the “Stringari proposal” [3].
3. Replicating the MIT experiment [1].
4. Spin gradient model investigations [10].
5. Investigations based on other theoretical proposals, e.g. [4].
The order of the sections is largely historical in nature, reflecting our own thinking about the
experiment as time progressed.
3.1 MIT experiment
One result that we can (and probably want to) check is the result of the MIT experiment. The
two clearest signatures that were interpreted as ferromagnetism in that experiment were seen in
the atom loss rate and the kinetic energy of the clouds as a function of the interaction parameter
kF a. The loss rate exhibited a characteristic peak at kF a ≈ 2.2, with a corresponding minimum
in the kinetic energy at kF a ≈ 2.2. Once our experiment is working, we can easily replicate these
measurements by creating a cloud of atoms in a spin-mixed state, quickly turning on the Feshbach
field to a given interaction strength, waiting for some amount of time (around 10ms, if we follow the
protocol used in [1]), quickly ramping the Feshbach field to either a zero-interaction point or to zero
field, and then performing time of flight imaging. The number of atoms remaining as a function of
time will give us the loss rate, while the size of the imaged cloud, if in the long time-of-flight limit,
will give the kinetic energy of the original cloud.
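For reference, the kinetic-energy extraction in the long time-of-flight limit is a one-line conversion: the cloud size becomes momentum-dominated, so a fitted rms radius σ per axis after flight time t gives E ≈ (3/2)m(σ/t)² for an isotropic cloud. The numbers below are purely illustrative, not measured values.

```python
# illustrative numbers only: an assumed fitted rms cloud radius after expansion
m = 40 * 1.66053906660e-27     # kg, approximate mass of a 40K atom
t = 10e-3                      # s, time of flight
sigma = 120e-6                 # m, hypothetical fitted rms radius per axis

# long time-of-flight limit: sigma ≈ (p_rms/m) t per axis, so the kinetic
# energy per particle of the original trapped cloud is (3/2) m (sigma/t)^2
E = 1.5 * m * (sigma/t)**2
print(E / 1.380649e-23)        # E/k_B in kelvin, for intuition
```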
Replicating the experiment in [1] has both benefits and disadvantages. The benefit is that
there is already a published result that we could compare to, either confirming or refuting their
observations. In addition, if our experiment works reliably, we could try repeating the experiment
with different ramp types/shapes for the Feshbach field as a way of assessing the effect of molecular
formation in [1] which has been a major route of attack against those results. The disadvantages of
performing this experiment are first, that it’s already been done (so it’s not very original) and that
neither signature is magnetic in nature (so this result could be questioned just like [1] was).
Nonetheless, I think that this is an essential investigation to perform. The experimental results
in [1], while impressive, have large error bars and the paper did not present a thorough investigation
of the low-interaction regime, where theoretical models are expected to be accurate; even the local
density approximation should do reasonably well until kF a ≈ 0.5. Collecting more data would
be really useful. Another reason to repeat the experiment is to simply serve the “repeatability”
requirement of scientific results, and to ensure that the results in [1] were indeed due to a property
of the fermion gas and not an experimental quirk.
3.2 Stringari proposal
The experiment described in [3] was the original idea for our experiment because it allows a direct
measurement of a magnetic property (the susceptibility) of the Fermi gas. Some more theoretical
thoughts about this proposal are discussed in section 2.3. The experiment consists of creating a
cloud of identical fermions below their Fermi temperature in a mixed spin state (i.e. in a mixture of
|↑⟩ and |↓⟩ states), with the Feshbach field set to a small interaction parameter value. By quickly
ramping the magnetic field, we create the desired repulsive interaction between the different spin
states using the Feshbach resonance, and allow the atoms to settle into their ground state. A brief
application of a spin-dependent potential (which will be a combination of an optical gradient and a
magnetic gradient to ensure there is no net average force on the atoms) imparts different quantities
of momentum to the different spin components. The relative momentum between the atoms forces
the atoms to oscillate relative to each other. After a short (variable) time of the order of the inverse
trap frequency (again, ≈ 10ms), the atoms are released from their confining harmonic trap and
imaged. The primary goal of observation will be monitoring the oscillation frequency (in time-of-flight imaging space, which will be largely similar to momentum space) of the two spin components
of the cloud as they relax to equilibrium. The secondary observed parameter will be the damping
of the oscillations.
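Extracting the two observables (frequency and damping) from such a data set amounts to fitting a damped cosine to the measured relative displacement of the two clouds. A minimal sketch with fabricated data follows; the sample times, frequency, damping rate and noise level are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped(t, A, gamma, omega, phi):
    # damped-cosine model for the relative displacement of the two spin clouds
    return A*np.exp(-gamma*t)*np.cos(omega*t + phi)

# fake "measurements": 60 hold times over 30 ms with 3% amplitude noise
rng = np.random.default_rng(1)
t = np.linspace(0, 30e-3, 60)
omega_true, gamma_true = 2*np.pi*130.0, 40.0
d = damped(t, 1.0, gamma_true, omega_true, 0.0) + 0.03*rng.normal(size=t.size)

p, _ = curve_fit(damped, t, d, p0=(1.0, 30.0, 2*np.pi*120.0, 0.0))
print(p[2]/(2*np.pi), p[1])   # fitted frequency (Hz) and damping rate (1/s)
```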
This experiment has several advantages. Most importantly, it is a direct measurement of a
magnetic property of the gas, so it should give a direct indication of any ferromagnetic behaviour
exhibited by the gas. This experiment has never been done before, so whatever we see, it will be
exciting and new. The papers [8, 3] make predictions regarding the behaviour of the gas using
quantum Monte Carlo simulations: our experiment would be either a validation or refutation of the
technique, which has long been a matter of contention between different theory groups studying
ultracold gas systems. The predictions made in [3] regarding the spin susceptibility of this system
are in contradiction with the experimental results of [2], and so measuring the spin susceptibility
using the method in [3] will either confirm their model for this system, or refute the interpretation
made in [2]. The measurement of an oscillation frequency is also much easier and more robust than
the measurement of kinetic energy or atom number, and is unaffected by any imaging distortions
or residual magnetic fields.
I think that we should certainly try to perform this experiment, because of its robustness and
because of the impact it could have in reconciling several theoretical [3, 13] papers with the experimental result in [2].
3.3 Spin textures
If the ferromagnetic state can indeed form in this system, and if its behaviour can be described by
the spin texture model in [10], then we can perform several interesting experiments with the system.
3.3.1 Playing with springs
If the system is in a ferromagnetic state, then each magnetic “domain” is resistant to distortions of
its spatially uniform magnetization. One way to interact with this system is to apply a magnetic
spin gradient in a direction perpendicular to the magnetization direction of the domains. Then,
different atoms (at different positions in the trap) will experience different Larmor precession rates,
effectively producing a relative torque on different spin components. In a very intuitive picture of
this process, the ferromagnetic state (and any signatures it exhibits) will persist in weak gradients,
but will get destroyed by stronger gradients that will “break up” the domains. Taking an even bolder
interpretational step, we might be able to make a very interesting phase diagram of “ferromagnetic”
vs. “paramagnetic” states on a kF a vs spin gradient two dimensional phase plane.
Another way to test this model without actually delving into ferromagnetism proper would be
to prepare the gas in a single spin state, apply a magnetic field gradient, remove the gradient slowly
and measure the total magnetization of the gas as a function of the gradient strength and interaction
parameter. If the spin gradient model is accurate and the gradient is not too large, any twist
in the magnetization of the cloud will be reversed when the gradient is removed, producing a fully
magnetized gas again. However, if the gradient was strong enough to mix up the magnetization,
the gas will remain demagnetized.
Another idea would be to again prepare a single spin state cloud, and apply a magnetic field
gradient. The gradient would then be turned off suddenly, leaving the “spring” in a twisted state.
If the gradient was weak and the ferromagnetic state was not destroyed by the gradient of the
magnetic field, then the cloud will undergo periodic twisting and untwisting of the magnetization
of the cloud (like a torsion spring). If we now measure the magnetization as a function of time after
the release of the magnetic field gradient, then the magnetization will experience periodic spikes as
all the atoms return to the original spin orientation. If on the other hand the fully aligned state
was destroyed by too strong of a gradient, then the resulting magnetization of the gas will always
just be 0.
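The expected magnetization signal from such a twisted state can be estimated with a one-dimensional toy model: spins at height z precess by a phase proportional to z, so the measured magnetization is the density-weighted average of cos(kz), where k is proportional to the gradient strength and its application time. A sketch with an assumed column-density profile:

```python
import numpy as np

# toy 1D column density of the cloud (arbitrary units, assumed shape)
z = np.linspace(-1, 1, 2001)
n = (1 - z**2)**2
N = n.sum()

def magnetization(k):
    # net magnetization of a cloud twisted by total phase gradient k:
    # a spin at height z has precessed by k*z, so M = <cos(k z)>
    return (n*np.cos(k*z)).sum()/N

for k in (0.0, 2.0, 10.0):
    print(k, magnetization(k))
```

Weak gradients (small k) leave the cloud nearly fully magnetized, while strong gradients wash the net magnetization out towards zero, which is the qualitative signature described above.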
3.3.2 Time of flight signatures of ferromagnetism
When the gas of 40K atoms undergoes a ferromagnetic transition, intuition tells us that something
interesting should happen in regard to the spatial distribution of different spin components. One
thing that limits our ability to see the effects of macroscopic ferromagnetic domains in the trap is
the rapid loss of atoms when the system is held near its Feshbach resonance. However, as observed
in [1] and discussed by [10], the atom loss rate should experience a maximum at the ferromagnetic
transition and fall rapidly at stronger interactions. If we can tune the magnetic field to be close
to the Feshbach resonance and create very strong repulsive interactions between the 40K atoms,
then we might be able to achieve trap lifetimes sufficient for observing time-of-flight signatures of
magnetic domains. If the ferromagnetic state is metastable and the lifetime of the ferromagnetic
state is sufficiently long, then the atoms really will spatially segregate in the trap.
One obvious signature of a ferromagnetic state would be a difference in the total kinetic energies
of the two spin components after a Stern-Gerlach separation of the two clouds; if one spin component
occupies the centre of the trap, with the other spin component forming a crust on top of this core,
then the core will have more kinetic energy per particle than the crust, producing a clear time-of-flight signature of the ferromagnetic transition.
In order to analyze more complex ferromagnetic states, we will do some simple semi-quantitative
modeling. Suppose spherical symmetry holds, as does the local density approximation. Then, while
trapped, the atoms will have a spatial distribution in the Stern-Gerlach basis {|+⟩, |−⟩}:

$$n_\pm(\mathbf{r}) = n(\mathbf{r})\, \frac{1 \pm m(\mathbf{r})}{2} \qquad (3.1)$$
In terms of phase space, we note that since the effective potential V induced by the Feshbach
resonance does not couple to the momentum of the particles (i.e. the Hamiltonian for the two
spins still has the usual simple form H = p²/2m + V(r)), the density in phase space is directly
proportional to the density in position space. Using (D.5), we can define separate potentials V±(r)
for the two spin components:

$$e^{(\mu_\pm - V_\pm(\mathbf{r}))/k_B T} = -\mathrm{Li}_{3/2}^{-1}\!\left(-\lambda_{dB}^3\, n_\pm(\mathbf{r})\right) \qquad (3.2)$$
This immediately gives the usual phase space densities we see in (D.2):

$$f_\pm(\mathbf{r}, \mathbf{p}) = \frac{1}{e^{\left(\frac{p^2}{2m} + V_\pm(\mathbf{r}) - \mu_\pm\right)/k_B T} + 1} \qquad (3.3)$$
After time of flight t, we as usual have

$$f_\pm(\mathbf{r}, \mathbf{p}, t) = \frac{1}{e^{\left(\frac{p^2}{2m} + V_\pm\left(\mathbf{r} - \frac{\mathbf{p}t}{m}\right) - \mu_\pm\right)/k_B T} + 1} \qquad (3.4)$$

Now we locally integrate over momentum space, neglecting the effect of the Stern-Gerlach
measurement, which will serve to spatially separate the two spin components, adding a term like ±tĈ
to the two wave packets:
$$n_\pm(\mathbf{r}, t) = \int \frac{d\mathbf{p}}{e^{\left(\frac{p^2}{2m} + V_\pm\left(\mathbf{r} - \frac{\mathbf{p}t}{m}\right) - \mu_\pm\right)/k_B T} + 1} = \int \frac{d\mathbf{p}}{-\dfrac{e^{\beta p^2/2m}}{\mathrm{Li}_{3/2}^{-1}\!\left(-\lambda_{dB}^3\, n_\pm\!\left(\mathbf{r} - \frac{\mathbf{p}t}{m}\right)\right)} + 1} \qquad (3.5)$$
This expression is almost certainly non-integrable. We can use numerical methods to do approximations and modeling. One limiting case is obvious: if t = 0, then (3.5) clearly reduces to a
tautological n± (r, 0) = n± (r, 0) using polylogarithm integration akin to the ones in Appendix B.
The other limiting case of t → ∞ is less clear, and will be a subject of future study.
Simulating the expansion numerically might be troublesome because the phase space from which
one starts is 6-dimensional (and since the integrals are not tractable, this can't be reduced). However,
a simple Monte Carlo approach should yield excellent results (although it might be useless for
fitting because it would generally be too slow). In particular, one would assume a chemical potential
µ for the gas, and then pick phase space cells at random and occupy them with the appropriate
probability (1 + e^{β(H(r,p) − µ)})⁻¹. Once the requisite number of particles has been placed in the
phase space, one can allow them to evolve in phase space according to the ballistic approximation,
and then the spatial density of the atoms at a later time can be computed by binning the atoms
into spatial bins in either 2 or 3 dimensions (probably 2, to obtain correspondence with time of
flight images).
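A skeleton of this Monte Carlo procedure, in dimensionless harmonic-oscillator units with invented values of β and µ, might look as follows (rejection sampling of phase-space cells, ballistic flight, then 2D binning):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, mu = 3.0, 4.0     # illustrative inverse temperature and chemical potential

def occupation(r2, p2):
    # Fermi-Dirac occupation of a phase-space cell with H = p^2/2 + r^2/2
    arg = np.clip(beta*(0.5*p2 + 0.5*r2 - mu), -50.0, 50.0)
    return 1.0/(1.0 + np.exp(arg))

# fill phase space by rejection sampling over a bounded 6D box
n_target = 5000
r0 = np.empty((0, 3)); p0 = np.empty((0, 3))
while len(r0) < n_target:
    r = rng.uniform(-6, 6, size=(100000, 3))
    p = rng.uniform(-6, 6, size=(100000, 3))
    keep = rng.random(100000) < occupation((r**2).sum(1), (p**2).sum(1))
    r0 = np.concatenate([r0, r[keep]])
    p0 = np.concatenate([p0, p[keep]])
r0, p0 = r0[:n_target], p0[:n_target]

# ballistic flight after the trap is switched off, then a 2D column-density image
t = 2.0
rt = r0 + p0*t
img, _, _ = np.histogram2d(rt[:, 0], rt[:, 1], bins=64)
print(img.sum(), rt[:, 0].std()/r0[:, 0].std())
```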
3.4 Molecular reaction rates
As reported in [5] and in [7], molecular formation will be an important process in a Fermi gas
near a Feshbach resonance.
3.5 Other signatures of ferromagnetism
In addition to the above ideas, we have considered other possibilities for experimental investigation
of the possible ferromagnetism seen in the strongly interacting gas of fermions that we are
studying.
3.5.1 Protected coherence
Analogously to the above ideas of using a spin magnetized cloud as a spring, one can imagine that
in a ferromagnetic state, the cloud will be resistant to dephasing between different parts of the
cloud, enabling a protected coherence between the |+⟩ and |−⟩ states with respect to the external
magnetic field basis, such that the atom cloud will not enter a mixed magnetization state, even
after waiting for a while.
The experimental idea here would be to prepare the cloud in a spin magnetized state |+⟩ near
the Feshbach resonance and then perform a spin echo experiment: send a π/2 pulse into our atoms
to induce them to be in a superposition of |+⟩ and |−⟩ states. Then, we wait for some time τ,
apply a π pulse, wait another time τ, and finally apply a π/2 pulse. If the atom spin state is indeed
"protected" from decoherence, the entire cloud's magnetization will precess equally during the two
τ intervals, and hence once the measurement is done, all the atoms should again be in the original
|+⟩ state.
Ferromagnetism will promote this protection of the coherence; we can then plot the magnetization of the cloud after the spin echo procedure as a function of τ for different values of kF a, and
see if there are any sharp features around the predicted values of the ferromagnetic transition.
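The logic of the echo sequence can be illustrated with a toy dephasing model: a static spread of precession rates is refocused perfectly by the π pulse, while detunings that fluctuate between the two τ intervals are not. A minimal numerical sketch, with an invented detuning spread and hold time:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 5e-3                                     # s, assumed echo half-time
delta = rng.normal(0.0, 2*np.pi*100.0, 5000)   # rad/s, static detuning spread

# the π pulse flips the sign of the phase accumulated in the first interval, so
# a static detuning contributes delta*tau - delta*tau = 0: a perfect echo
echo_static = abs(np.mean(np.exp(1j*(delta*tau - delta*tau))))

# if the detunings decorrelate between the two intervals, the echo fails
delta2 = rng.normal(0.0, 2*np.pi*100.0, 5000)
echo_fluct = abs(np.mean(np.exp(1j*(delta*tau - delta2*tau))))
print(echo_static, echo_fluct)
```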
3.5.2 Calorimetric signatures
Some papers [11] predict that for T < 0.2T_F, the itinerant ferromagnetism in fermions is a first-order
phase transition, and hence should have an associated latent heat. This means that if we ramp
the system into a ferromagnetic state when its temperature is less than some critical value, the
temperature of the cloud will increase in a qualitatively different way than if the gas was above
this temperature.
3.5.3 Spin imbalanced gases
Using a partially magnetized gas as the initial state of the system will enhance the formation of
spontaneous magnetization (after all, a fully magnetized gas is inherently ferromagnetic). This
imbalancing may make other investigations less difficult.
3.5.4 Local magnetization
If we fail to observe macroscopic domains, we need to think about possible ways to distinguish
between a non-ferromagnetic and a locally anti-correlated state, as presented in [4].
Chapter 4
Sub-projects
My involvement in the lab has consisted of two main components: helping out with day-to-day lab
running (installation of blackout curtains, optics alignment, and countless other small tasks), as
well as two side-projects that will soon become part of the main experimental apparatus.
4.1 Microwave evaporation
The idea of using microwaves for evaporative cooling is entirely analogous to the RF evaporation
already employed. In order to perform the evaporation, we have purchased an Anritsu MG37022A
microwave source, which should provide the spectral purity and switching speed necessary for successful evaporation.
The basic premise of microwave evaporation in a magnetic trap is fairly simple: the 87Rb atoms
that we want to evaporatively cool have a hyperfine transition from the F = 2 state to the F = 1
state, with a frequency approximately equal to 6.8GHz. We prepare the trapping magnetic fields
(an Ioffe-Pritchard trap, which is essentially a quadrupole trap with an extra coil to ensure there
is no spin-rotating magnetic field zero) in such a way that the magnetic field increases as a function
of the distance from the centre of the trap. States with positive gF will then be trapped, while
states with negative gF will be anti-trapped. By adding microwave radiation resonant with the
F = 2, mF = 2 → F = 1, mF = 1 transition at locations in the trap that are distant from the
trap, only the hottest atoms (i.e. the ones that can reach parts of the trap far from its centre)
will undergo the microwave transition and get kicked out of the trap. The remaining atoms will
rethermalize, cooling down. Since each leaving atom carries significantly more energy than the
average atom (the evaporation is a slow process lasting several seconds), the gas can be cooled by
a factor of 100 while losing only about 90% of the atoms.
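The "factor of 100 for 90% loss" figure follows from a simple energy-balance estimate. In a 3D harmonic trap the total energy is E = 3NkBT; if each escaping atom removes on average η kBT, then d(3NT) = ηT dN gives T ∝ N^{η/3−1}. The toy calculation below shows that η ≈ 9 (an assumed evaporation parameter, not a measured one) is what this crude model requires to reproduce the quoted numbers.

```python
def temperature_ratio(fraction_kept, eta):
    # T_f/T_i = (N_f/N_i)^(eta/3 - 1) from the energy-balance model above
    return fraction_kept ** (eta/3.0 - 1.0)

# keeping 10% of the atoms with eta = 9 cools the gas 100-fold
print(1.0/temperature_ratio(0.1, 9.0))
```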
4.1.1 Overview of the Anritsu MG37022A
This microwave source is significantly more capable than is necessary for our experiment and thus
should offer further applications we might not have yet thought of. In particular, it has an extremely
narrow and pure spectral signal, with linewidth well below the resolution of any spectrometer we
could get our hands on (<200Hz for sure). On the other hand, we discovered some limitations in
regard to coding the generator, which may limit its uses to us, as discussed below.
We have a physical copy of the Anritsu MG37022A operations manual that can also be found
at http://www.anritsu.com/en-GB/Downloads/Manuals/OperationsManual/DWL3176.aspx. In
addition, we have a physical copy of the programming manual found at http://www.anritsu.com/
en-GB/Downloads/Manuals/ProgrammingManual/DWL3177.aspx. These documents, as well as several
others, can be found at http://www.anritsu.com/en-GB/Products-Solutions/Products/
MG37020A.aspx.
4.1.2 Programming the Anritsu MG37022A
The Anritsu MG37022A signal generator can be programmed by sending it a sequence of strings in
the standard SCPI programming language. The communication protocol is conveniently taken care
of by LabWindows and the ivi.h library.
In particular, the viPrintf command in the ivi.h library communicates the string argument
to the generator, making it busy for a fraction of a second. However, even when the communication
is done, the signal generator can be internally busy, updating internal tables. This seems to be a
serious inherent flaw in the signal generator's design. Note that for whatever reason, the command
“viWrite” in the IVI library (ivi.h) should not be used, because it locks up the program running
on the computer as well as the signal generator itself.
The most annoying (but probably also the most beneficial) feature of programming in LabWindows is the extremely stringent adherence to data type definitions. Even the simplest types (e.g.
int) have multiple aliases, such as viSession (which is the integer key assigned to a newly opened
instrument). The main library to use for communicating with peripheral instruments over Ethernet
(or USB) is ivi.h.
The biggest hurdle we have faced with programming the Anritsu MG37022A is the difficulty of
programming it to perform frequency (and power) sweeps. The signal generator only has a stepped
sweep mode (i.e. no continuous output while sweeping). There is an internal linear frequency (but
not power) sweep generator; for everything else, one needs to use the “list sweep mode” where
the individual steps of the sweeps have to be uploaded to the signal generator as a long string.
Moreover, for some reason as of yet not understood by us and not explained by Anritsu, when one
tries to program the internal power settings of the signal generator, the internal updating process
takes a really long time (40 seconds for 10000 points, and proportionately less for smaller lists).
We spent a long time trying to figure out what was wrong and how we could fix it, ultimately
giving up and purchasing an external voltage variable attenuator, while running the Anritsu
MG37022A in constant power mode.
Some sample commands (in particular, the commands that might be useful to us), and their
function are in Table 4.1.
Table 4.1: Table of sample commands for the Anritsu MG37022A. The commands must be exactly
as shown, including spaces and especially colons.

  Command                                           Arguments    Function
  :DISPlay:REMote S                                 S [ON/OFF]   Enable/disable MG37022A display updating when values are remotely updated
  :SOURce:FREQuency:CW XMHz                         X [float]    Set the source to continuous wave output and set the frequency to X MHz
  :SOURce:FREQuency:STARt XMHz                      X [float]    Set the source to frequency ramp output and set the ramp start frequency to X MHz
  :SOURce:FREQuency:STOP XMHz                       X [float]    Set the source to frequency ramp output and set the ramp end frequency to X MHz
  :TRIGger:SOURce HOLD                                           Do not trigger any sweeps: continue with the current sweep and then remain in CW mode at the last frequency
  :TRIGger:SOURce EXTernal                                       Trigger a sweep based on input to connector #9 at the rear
  :SOURce:LIST:INDex N                              N [int]      Set the current position within the list to N (for writing values to the list)
  :SOURce:LIST:FREQuency X1MHz, X2MHz, ..., XnMHz   Xi [float]   Set the frequency values in positions N, N+1, ..., N+n-1 within the internal list to the set Xi MHz
  :SOURce:LIST:POWer X1dBm, X2dBm, ..., XndBm       Xi [float]   Set the power values in positions N, N+1, ..., N+n-1 within the internal list to the set Xi dBm. Note 0dBm = 1mW power; each change of 10dBm is a factor of 10 in power.
  :SOURce:LIST:STARt N                              N [int]      Set the start of the list sweep mode sweep to N
  :SOURce:LIST:STOP N                               N [int]      Set the end of the list sweep mode sweep to N
Despite many hurdles and confusions with regard to incorporating the signal generator into the
apparatus, the generator can now be controlled from the central experimental control interface.
4.1.3 Sample code
In this section, we present an excerpt from the function AnritsuCOMMUNICATE found in the
source file AnritsuCommands that contains the communication commands used in controlling the
microwave generator. Note that the ivi.h library must be included in the source file containing
these lines in order for the viPrintf command to function properly.
//Use a list as the source for its frequency and power sweep settings
viPrintf(VIsess,":SOURce:FREQ:MODE LIST\n");
//Trigger sweeps using the external trigger (input on back panel)
viPrintf(VIsess,":TRIGger:SOURce EXTernal\n");

//Use both frequency and power lists
viPrintf(VIsess,":SOURce:LIST:TYPE FLEVel\n");
//Enable microwave output
viPrintf(VIsess,":OUTPut:STATe ON\n");
//Start sweeps of the list on the first element (indexed by 0)
viPrintf(VIsess,":SOURce:LIST:STARt 0\n");
//End sweeps of the list on the last element (indexed by 10000)
viPrintf(VIsess,":SOURce:LIST:STOP 10000\n");
//Spend the given amount of time (0.1ms here) on each list element
viPrintf(VIsess,":SOURce:LIST:DWELl 0.1ms\n");

/*
some code to generate a list of frequencies and powers that the signal
generator will output, formatted appropriately as:
:SOURce:LIST:FREQuency []MHz, []MHz, ... , []MHz
and
:SOURce:LIST:POWer []dBm, []dBm, ... , []dBm
*/

//msg is the string with the list of frequencies
viPrintf(VIsess,msg);

//Reset the index in the list to which we are writing to 0
viPrintf(VIsess,":SOURce:LIST:INDex 0");
//And send the power list string on its way
viPrintf(VIsess,msg);
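The elided list-generation step above just builds two long comma-separated strings. A Python prototype of that formatting (the production code is LabWindows C; the function name and the linear ramp are our own illustrative choices) might look like:

```python
def list_sweep_strings(f_start_mhz, f_stop_mhz, power_dbm, npts):
    # build the SCPI list strings in the format expected by Table 4.1
    freqs = [f_start_mhz + (f_stop_mhz - f_start_mhz)*i/(npts - 1)
             for i in range(npts)]
    freq_cmd = ":SOURce:LIST:FREQuency " + ", ".join(f"{f:.6f}MHz" for f in freqs)
    pow_cmd = ":SOURce:LIST:POWer " + ", ".join(f"{power_dbm:.2f}dBm" for _ in freqs)
    return freq_cmd, pow_cmd

f_cmd, p_cmd = list_sweep_strings(6830.0, 6820.0, 0.0, 5)
print(f_cmd)
print(p_cmd)
```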
4.1.4 Microwave circuit
Integrating the Anritsu MG37022A into the apparatus proved challenging because of several
previously unknown factors about its behaviour, as described above. Nonetheless, treating the signal
generator operationally as a constant-power source of microwaves at different frequencies, we
constructed the circuit shown in Figure 4.1, which successfully delivers enough microwave power to the chip
(and hence atoms) to evaporatively cool them.
4.1.5 Microwave calibration
To assess the performance of our microwave system and the circuit in Figure 4.1, we performed
several calibrations.
The first thing was to calibrate the response curve of the diode we are using to measure all
other microwave powers. We connected the Anritsu MG37022A to the diode and varied the output
power at a constant frequency of 6.83GHz, producing Figure 4.2. The voltage-to-power and
power-to-voltage conversion curves, with corresponding cubic interpolation functions, are shown in Figure
4.3.
Figure 4.1: The circuit we constructed for microwave evaporation. The Anritsu MG37022A produces
a frequency sweep (at about a -30dBm power level) which is then modulated in amplitude by a
voltage variable attenuator (-4dB to -30dB). The MSH-5727901 amplifier boosts the microwave
power by about 57dB. The microwaves can then either be sent into the chip, or reflected from the
TTL switch and absorbed by a 25W impedance-matched terminator. A -30dB directional coupler
combined with a diode allow us to monitor the reflected power (which is proportional to the input
power to the chip if the TTL switch is closed). A bias tee combines a DC signal (which is used to
keep the chip temperature constant) with the microwaves, and together they pass through the dark
purple wire on the chip. The transmitted microwaves are finally absorbed by another high-power
terminator.
Once the diode was calibrated, we proceeded to measure the gain characteristics of the MSH-5727901 amplifier. We again used 6.83GHz as the baseline frequency. By measuring the output
power from the amplifier as a function of the input power, we could determine the gain of the
amplifier (Figure 4.4). As a final test of the system, and to see how much of the microwave power
was getting to the chip, we measured the reflectivity and transmittivity of the chip to microwaves
(Figure 4.5), finding that the total loss for the microwaves going through the chip is about 37dB,
meaning that about 1.4% of the input power makes it to the chip.
We are currently working on optimizing the microwave evaporation routine, perhaps combining
it with RF evaporation (as well as microwave evaporation for 40K using its hyperfine transition
around 1.2GHz) to obtain the greatest possible number of the coldest possible atoms.
Figure 4.2: Output signal of the diode as a function of the input power (both axes are log scales).
Blue points were gathered without an isolator on the output of the Anritsu MG37022A, while the
red points were collected with one.
4.2 Image analysis
The most detailed probe of the behaviour of ultracold atoms is either fluorescence or absorption
imaging. In either case, one is left with a 2-dimensional image of the optical density (the negative
logarithm of the proportion of light transmitted). One then needs to fit the observed image to some model
Figure 4.3: Left plot gives a conversion curve between the power input into the diode and the output
voltage, while the right plot gives the generally more useful voltage to power conversion curve. The
cubic interpolation curves are certainly good between the lowest and highest collected data points
(-30dBm to +20dBm) but should be used cautiously outside of this interval.
Figure 4.4: Left plot gives the raw data for the output power of the microwave amplifier as a
function of the input power, while the right plot shows the corresponding gain. As can be seen, the
gain is nearly constant at 57dB until the output power of the amplifier saturates at 43dBm, which
equals 20W.
Figure 4.5: These plots show that the chip transmission and reflection properties are essentially
independent of the input power. However, note that the reflectivity of the microwaves from the
chip depends on the input frequency quite strongly for reasons that we do not yet understand.
(as discussed in greater detail in Appendix A).
A recent report by Ketterle and Zwierlein [14, Ch. 3] discussed functions for fitting time of
flight expansion images. In analyzing the images that result from the relative oscillation of the two
sub-clouds, we will have to determine the appropriate fitting functions and extract as much useful
information as possible from the clouds. The absolute minimum of information that we require is the position of the centre of each of the two atom clouds (|↑⟩ and |↓⟩). Tracking these centres gives the average momentum of each spin component, which in turn allows us to extract the relative oscillation frequency of the two components, as well as damping and other kinematic parameters.
In addition, the time of flight images contain information about the temperature, internal energy,
and possibly other parameters of the gas.
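As a minimal illustration of the first step (this is not the analysis code used in the lab), the centre of a cloud can be estimated from intensity-weighted first moments of the optical density. A Python sketch on a hypothetical toy image:

```python
# Hypothetical illustration: estimate a cloud centre from an optical-density
# image by computing intensity-weighted first moments.
def cloud_centre(od, rows, cols):
    """od[i][j] is the optical density at pixel (i, j); returns (x0, y0)."""
    total = sum(od[i][j] for i in range(rows) for j in range(cols))
    x0 = sum(i * od[i][j] for i in range(rows) for j in range(cols)) / total
    y0 = sum(j * od[i][j] for i in range(rows) for j in range(cols)) / total
    return x0, y0

# A symmetric test cloud centred at row 2, column 1:
od = [[0, 0, 0],
      [0, 1, 0],
      [0, 2, 0],
      [0, 1, 0],
      [0, 0, 0]]
centre = cloud_centre(od, 5, 3)  # → (2.0, 1.0)
```

Repeating this for each spin cloud at successive expansion times yields the centre-of-mass trajectories from which the relative oscillation can be read off.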
In [14], the authors suggest using the fitting function for the cloud shape of a non-interacting
expanding Fermi gas in order to fit expansion images even for an interacting gas, and then interpreting the fit parameters as “effective” parameters, e.g. effective temperature and chemical potential.
In [14, Fig. 18], they show that this method produces close fits to the observed cloud shapes.
Figure 4.6: Illustration of the raw images obtained from the camera when using absorption imaging on 40K. Note the very prominent interference fringes in the raw data; phase drift of these fringes between exposures is a potentially major hurdle for quantitative image analysis. This image does not use background subtraction.
Figure 4.7: An absorption image (ratio of light with atoms to light without atoms) of ultracold 40K after a 4.6ms expansion. Such images (and any parameters we can extract from them) are ultimately the only way we can probe the trapped atoms. This image does not use background subtraction.
Figure 4.8: The first image shows a time of flight image (expansion time = 4ms) of a cloud of 40K, while the second image is the background image taken a few milliseconds later, when all the atoms have fallen out of the imaging area. Note that these two images use background subtraction: identical images were collected without trapped atoms, and the intensities of both the atom image and the background image are normalized to those reference images.
Figure 4.9: The left panel shows the optical density found from the atom part of the frame in Figure 4.8, the central panel is the fitted Fermi function given in (D.13), and the right panel shows the residuals multiplied by 10. The fit predicts that the atom cloud contained 29400 atoms at a temperature of T /TF = 0.54. Note that an absolute temperature cannot be estimated because the trap frequencies were not measured when these images were collected.
Algorithms
A more in-depth discussion of the algorithms used below is given in section A.2. With the polylogarithm machinery in place, we can implement the expression in (D.13) as a fitting function.
In order to assess the algorithm in terms of both performance and accuracy, we perform a Monte Carlo simulation (i.e. many random trials) of the algorithm on simulated noisy data. We generate images of the form given by eq. (D.13), add variable Gaussian noise (which should simulate both atom and photon noise, at least to a first approximation), perform a Fermi function fit, and then measure the deviation of the fitted temperature and atom count from the "real", original values. Note that Marcius's thesis [15] provides some discussion of these fitting issues in its appendices. A sample fit to a real Fermi cloud is shown in Figure 4.9.
The first set of tests is performed on a 61 × 61 pixel image, with a cloud with parameters A = 1, O = 0.1, x0 = 3, Rx = 10, y0 = 3, Ry = 15. We hold these parameters fixed and vary only q and the signal/noise ratio in order to see the accuracy of the algorithm in determining the thermodynamic parameters N (number of atoms, proportional to Rx Ry A) and T (temperature, proportional to R_i^2 / f(e^q)). Figures 4.12 and 4.13 summarize the performance of the algorithm. In brief, the relative error in the measured thermodynamic quantities is approximately equal to the noise/signal ratio (i.e. the standard deviation of the Gaussian noise divided by the peak height of the cloud function) for small errors, with ∆T /T growing as q becomes increasingly negative.
We also tested the algorithm on real data, confirming its speed and reliability (Figure 4.14).
More tests will be done as necessary.
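A stripped-down Python sketch of this Monte Carlo procedure, using a one-dimensional Gaussian profile as a stand-in for the Fermi shape of (D.13) and a deliberately crude peak estimator; all numbers are hypothetical:

```python
import math
import random

# Sketch of the Monte Carlo accuracy test: generate a model profile, add
# Gaussian noise, "fit" it, and record the deviation from the true parameters.
def model(x, A, x0, R):
    return A * math.exp(-((x - x0) / R) ** 2)

random.seed(0)  # deterministic "random" trials
A_true, x0_true, R_true, noise_sd = 1.0, 30.0, 10.0, 0.05
xs = list(range(61))

errors = []
for _ in range(200):
    ys = [model(x, A_true, x0_true, R_true) + random.gauss(0.0, noise_sd)
          for x in xs]
    A_fit = max(ys)  # crude amplitude estimate from the noisy profile
    errors.append(abs(A_fit - A_true))

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
# rms is comparable in magnitude to the noise level noise_sd
```

The real test replaces the Gaussian and the crude estimator with the full two-dimensional Fermi function and the polylog fitter, but the bookkeeping is the same.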
Figure 4.10: Histogram (200 bins) of the residuals shown in the rightmost panel of Figure 4.9.
Note that the histogram is nearly Gaussian (as expected from the imaging shot noise), while the
RMS of the noise is 0.023 optical density. Combining this value with the Monte Carlo simulations for statistical errors gives error estimates for the above fit parameters of N = 29400 ± 1200 and T /TF = 0.54 ± 0.01.
Figure 4.11: We plot the temperature (in terms of the Fermi temperature at the centre of a harmonic trap) as a function of the parameter q = βµ = log(z), where z is the fugacity. Limiting behaviour: for q ≪ 0, T /TF → 6^{−1/3} e^{−q/3}, while for q ≫ 0, T /TF → 1/q.
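The limiting forms quoted in the caption can be checked numerically. A Python sketch, using the standard harmonic-trap relation T/T_F = (6 f_3(q))^{−1/3} with f_3(q) = −Li_3(−e^q) (this relation is an assumption of the sketch, consistent with the limits quoted above), evaluating the polylogarithm by its defining series:

```python
import math

# Check the classical limit T/T_F -> 6^(-1/3) * exp(-q/3) for q << 0,
# against T/T_F = (6 * f3(q))**(-1/3) with f3(q) = -Li_3(-e^q).
def f3(q):
    """-Li_3(-e^q) by its defining series (valid for e^q < 1, i.e. q < 0)."""
    z = math.exp(q)
    return sum((-1) ** (k + 1) * z ** k / k ** 3 for k in range(1, 200))

def t_over_tf(q):
    return (6.0 * f3(q)) ** (-1.0 / 3.0)

q = -5.0
exact = t_over_tf(q)
classical_limit = 6.0 ** (-1.0 / 3.0) * math.exp(-q / 3.0)
# deep in the classical regime the two agree to a fraction of a percent
```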
Figure 4.12: Four images of simulated fermion clouds as described in main text (A = 1, O =
0.1, x0 = 3, Rx = 10, y0 = 3, Ry = 15) with different amounts of Gaussian noise added: 0, 0.04, 0.08,
and 0.12 standard deviation noise in the four images, in that order from left to right.
Figure 4.13: Results of fitting noisy Fermi clouds with my algorithms. The plots show the RMS
relative error of the fitted parameters of the clouds as a function of the parameter q (x-axis) and of
the normalized signal to noise ratio of the simulated clouds (different lines).
Figure 4.14: The panels show different parameters obtained from fitting 330 experimental shots of
Fermi clouds taken on April 13, 2011. No background subtraction was used, so the background
noise level is likely higher than what would be expected.
Bibliography
[1] Gyu-Boong Jo, Ye-Ryoung Lee, Jae-Hoon Choi, Caleb A. Christensen, Tony H. Kim, Joseph H.
Thywissen, David E. Pritchard, and Wolfgang Ketterle. Itinerant Ferromagnetism in a Fermi
Gas of Ultracold Atoms. Science, 325(5947):1521–1524, 2009.
[2] A. Sommer, M. Ku, G. Roati, and M. W. Zwierlein. Universal Spin Transport in a Strongly
Interacting Fermi Gas. Nature, 472:201–204, 2011.
[3] Alessio Recati and Sandro Stringari. Spin fluctuations, susceptibility, and the dipole oscillation
of a nearly ferromagnetic Fermi gas. Phys. Rev. Lett., 106(8):080402, Feb 2011.
[4] Hui Zhai. Correlated versus ferromagnetic state in repulsively interacting two-component Fermi
gases. Phys. Rev. A, 80(5):051605, Nov 2009.
[5] Shizhong Zhang and Tin-Lun Ho. Atom loss maximum in ultra-cold Fermi gases. ArXiv e-prints,
2011.
[6] Xiaoling Cui and Hui Zhai. Stability of a fully magnetized ferromagnetic state in repulsively
interacting ultracold Fermi gases. Phys. Rev. A, 81(4):041602, Apr 2010.
[7] David Pekker, Mehrtash Babadi, Rajdeep Sensarma, Nikolaj Zinner, Lode Pollet, Martin W.
Zwierlein, and Eugene Demler. Competition between pairing and ferromagnetic instabilities in
ultracold Fermi gases near Feshbach resonances. Phys. Rev. Lett., 106(5):050402, Feb 2011.
[8] S. Pilati, G. Bertaina, S. Giorgini, and M. Troyer. Itinerant Ferromagnetism of a Repulsive
Atomic Fermi Gas: A Quantum Monte Carlo Study. Phys. Rev. Lett., 105(3):030405, 2010.
[9] Lindsay J. LeBlanc. Exploring many-body physics with ultracold atoms. PhD thesis, University
of Toronto, 2010.
[10] L. J. LeBlanc, J. H. Thywissen, A. A. Burkov, and A. Paramekanti. Repulsive Fermi gas in a
harmonic trap: Ferromagnetism and spin textures. Phys. Rev. A, 80:013607, 2009.
[11] R. A. Duine and A. H. MacDonald. Itinerant ferromagnetism in an ultracold atom Fermi gas.
Phys. Rev. Lett., 95(23):230403, Nov 2005.
[12] Cheng Chin, Rudolf Grimm, Paul Julienne, and Eite Tiesinga. Feshbach resonances in ultracold
gases. Rev. Mod. Phys., 82(2):1225–1286, Apr 2010.
[13] R. A. Duine, Marco Polini, H. T. C. Stoof, and G. Vignale. Spin drag in an ultracold Fermi
gas on the verge of ferromagnetic instability. Phys. Rev. Lett., 104(22):220403, Jun 2010.
[14] Wolfgang Ketterle and Martin W. Zwierlein. Making, probing and understanding ultracold Fermi gases. In M. Inguscio, W. Ketterle, and C. Salomon, editors, Proceedings of the International School of Physics "Enrico Fermi", Course CLXIV, Varenna, 20-30 June 2006. IOS Press, Amsterdam, 2008.
[15] Marcius H. T. Extavour. Fermions and Bosons on an Atom Chip. PhD thesis, University of
Toronto, 2009.
[16] Milton Abramowitz and Irene A. Stegun. Handbook of mathematical functions with formulas,
graphs, and mathematical tables, volume 55 of National Bureau of Standards Applied Mathematics Series. United States National Bureau of Standards, 1964.
Appendix A
Data fitting with MATLAB
Dimensional reduction is the essential goal of data analysis. In scientific experiments, the data
generated by the experiment are often not directly the quantity that one desires to investigate, and
the experiment generates much more data than the final number of parameters that one wishes
to extract. The collection of experimental measurements µi can be seen as a vector in a high-dimensional vector space V; the goal of data analysis is then to identify the directions in this vector space where the essential physics is contained, as opposed to directions containing
noise. In turn, the noise-containing dimensions can be classified into different sources and kinds
of noise, with such a classification aiding in the interpretation of the experimental results and in
improving the apparatus. The general technique of seeking the “most significant” dimensions in a
set of data is known as principal component analysis. In this approach to data analysis, one assigns
the greatest significance to the dimensions that can explain the greatest proportion of the variance
in the data. Fortunately, one does not need to resort to this method in physics because one already
has a model to try to explain the acquired data, and one can then extract meaningful parameters
based on physical arguments.
As a simple example, consider a measurement of the value of gravitational acceleration using
a pendulum. The raw measurements would be the time delays ti between subsequent moments
when the pendulum passed through its lowest point (acquired for example using an electronic gate).
Suppose several hundred measurements are performed on several separate days. A first estimate of g can be found by simply averaging all the values of t_i and using the formula t = 2\pi\sqrt{l/g}. Averaging is equivalent to fitting all the data with a function of the form f_i = const and is an extremely effective dimensional reduction tool, taking any data dimension and turning it into a single number. It is also frequently too heavy-handed an approach, eliminating any internal structure and other
information in the data.
A more sophisticated analysis might try to correlate the observed intervals with the temperature
of the pendulum, the time of day, the pressure in the room, and any of a host of other similar
experimental parameters, in an attempt to explain some of the variance between different data
points. A model must be created; for example, the clock slows down at night because it gets colder,
while the oscillations of the pendulum vary with air pressure because of differences in buoyancy. We would then fit the data t to a function of the form t = 2\pi\sqrt{l/(g + A(P))} \cdot B(T), where A(P) and B(T) reflect our beliefs regarding the influence of pressure and temperature on the experiment. If
the model is correct, then it will be possible to ascribe a large proportion of the variance between
different measurements to physical influences on the system and hence produce a more accurate
estimate of the acceleration g than a simple averaging ever could.
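A toy numerical version of this argument, with hypothetical numbers: if the measured period drifts linearly with temperature, a plain average over an asymmetric set of temperatures is biased, while a least-squares fit of t = a + bT recovers the period at the reference temperature.

```python
# Hypothetical data: the period drifts linearly with temperature.
temps = [0.0, 1.0, 2.0, 3.0, 4.0]   # temperature offsets from reference (K)
t0, c = 2.000, 0.010                # true period (s) and drift (s/K)
periods = [t0 + c * T for T in temps]

plain_average = sum(periods) / len(periods)   # biased towards warm data

# Closed-form least squares for the one-regressor model t = a + b*T:
n = len(temps)
Tbar = sum(temps) / n
tbar = sum(periods) / n
b = sum((T - Tbar) * (t - tbar) for T, t in zip(temps, periods)) / \
    sum((T - Tbar) ** 2 for T in temps)
a = tbar - b * Tbar   # unbiased estimate of the reference-temperature period
```

Here the plain average comes out at 2.02 s while the fitted intercept a recovers 2.000 s exactly, because the model explains the temperature-induced variance.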
Ultimately, analysis of any data reflects our trust in certain assumptions.
In ultracold atom physics, one faces a similar situation. The raw input data are images of atom clouds, which contain copious quantities of data. However, much of the information in an image is imperfect, due to distortions and imperfections in the imaging system. As a result, one fits the observed data to some model in order to extract the physically meaningful parameters from the image, with the most basic models being the ideal gas functions derived in Appendices C and D. At the cost of a great reduction of the data dimension, one obtains robust estimates of the number and temperature of the atoms. By assuming that successive experimental runs are not too different, one can then average the results of different runs, eventually drawing a conclusion about the behaviour of quantum gases.
A.1
Functions and tips
MATLAB comes with a rich library of fitting functions; indeed, the library is so rich that it is sometimes not clear which function to use. Nonetheless, three of the functions, lscov, lsqnonlin, and fminsearch, are generally sufficient for any fitting purpose, and offer the best performance for their respective classes of problems. These functions are also useful because they are all built into MATLAB (as opposed to being in a toolbox).
A.1.1
lscov
lscov is an appropriate function for performing any form of multilinear regression; that is, one assumes the data consist of a vector z that depends linearly on a set of vectors a_i of the same length: z(j) = \sum_i a_i(j) x_i, where the x_i are the fit coefficients. The syntax for using the function is:

x = lscov(A,z,w)

The vector x minimizes the quantity \sum_j w_j \left( z_j - \sum_i a_i(j) x_i \right)^2, where w is a weight vector for the fit; points with a larger w have a greater impact on the fit.
Generally, the most useful application of lscov is fast (weighted) polynomial fitting. For example, if one wants to fit a model z(x) = ax^3 + bx^2 + cx + d, then one simply fills the columns of A with the vector x raised to different powers, followed by an application of lscov:

A(:,1) = ones(length(x),1); A(:,2) = x; A(:,3) = x.^2; A(:,4) = x.^3;
p = lscov(A,z,w)

The returned vector p then contains the coefficients in the order of the columns of A, i.e. [d; c; b; a].
The main advantage of lscov is its extreme speed; if z contains 10000 distinct data points
and A contains 30 possible parameters that could explain the observed data, the function takes an
average of 0.05s to run.
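For readers without MATLAB, here is a pure-Python sketch of the computation lscov performs, solving the weighted normal equations directly (this is illustrative of the mathematics, not of how lscov is implemented internally):

```python
# Weighted linear least squares via the normal equations (A^T W A) x = A^T W z.
def wlsq(A, z, w):
    m = len(A[0])
    # Build the normal equations.
    M = [[sum(w[j] * A[j][r] * A[j][c] for j in range(len(z)))
          for c in range(m)] for r in range(m)]
    v = [sum(w[j] * A[j][r] * z[j] for j in range(len(z))) for r in range(m)]
    # Gaussian elimination (no pivoting; fine for this well-conditioned demo).
    for i in range(m):
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
            v[r] -= f * v[i]
    x = [0.0] * m
    for i in reversed(range(m)):
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, m))) / M[i][i]
    return x

# Fit z = x0 + x1*t to exact linear data with unit weights.
ts = [0.0, 1.0, 2.0, 3.0]
A = [[1.0, t] for t in ts]
z = [2.0 + 0.5 * t for t in ts]
coeffs = wlsq(A, z, [1.0] * len(z))  # recovers [2.0, 0.5]
```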
A.1.2
lsqnonlin
lsqnonlin is the workhorse of data fitting, primarily because of its balance of speed and flexibility. lsqnonlin requires as one of its inputs a handle to a function f(p), which it then calls repeatedly, seeking to minimize \sum_i f_i(p)^2 within the space allowed for the parameter vector p. This kind of function is called a function function in MATLAB, because of its reliance on and interaction with an external, user-provided function. The syntax for using the function is:
p = lsqnonlin(f,p0)
where f is the function, p0 contains the initial parameter guesses (mandatory), and p contains the optimized parameter values. For example, if one wants to fit an exponential function of the form A e^{-Bx} to a data set, then the function f would have to look like:

function dy = f(p)
A = p(1);
B = p(2);
dy = y - A * exp(-B*x);
end

Note that the vectors x and y must already be defined in the function containing the function f; lsqnonlin will not pass these vectors to the optimization function.
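To make the mechanics concrete, here is a Python sketch of the kind of iteration such a solver performs for this exponential model: plain Gauss-Newton steps on the residual vector (lsqnonlin itself uses more sophisticated trust-region methods; this is a minimal stand-in for illustration):

```python
import math

# Gauss-Newton minimization of sum_i (y_i - A*exp(-B*x_i))^2 over p = (A, B).
def gauss_newton(xs, ys, A, B, iters=50):
    for _ in range(iters):
        e = [math.exp(-B * x) for x in xs]
        r = [y - A * ei for y, ei in zip(ys, e)]          # residuals
        J = [(-ei, A * x * ei) for x, ei in zip(xs, e)]   # d(r_i)/d(A, B)
        # Solve the 2x2 normal equations (J^T J) dp = -J^T r.
        a11 = sum(j0 * j0 for j0, _ in J)
        a12 = sum(j0 * j1 for j0, j1 in J)
        a22 = sum(j1 * j1 for _, j1 in J)
        b1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        b2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        A += (b1 * a22 - b2 * a12) / det
        B += (a11 * b2 - a12 * b1) / det
    return A, B

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [3.0 * math.exp(-1.2 * x) for x in xs]   # noiseless synthetic data
A_fit, B_fit = gauss_newton(xs, ys, 2.0, 1.0)  # converges to (3.0, 1.2)
```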
A more involved example of the use of this function is given in section A.2 below, where we use
the function repeatedly to fit the shape of a cloud of fermions after time-of-flight expansion.
A.1.3
fminsearch
When all else fails, one can rely on fminsearch. This is an unconstrained, no-assumption search
for the minimum of a function. In general, it is significantly slower than lsqnonlin, but can be
used for much more complex problems.
A.1.4
Tips and tricks
In several years of working with MATLAB and especially fitting data with it, I have learned about
a few idiosyncrasies, limitations, and other quirks of MATLAB that I summarize below.
1. Never use loops.
MATLAB gives the user almost unlimited flexibility in the usage of syntax, with implicit
type checking and conversion throughout the code. This flexibility comes with a price: each
operation carries a significant overhead. The only real way to avoid this problem is to vectorize as many operations as possible; in other words, to avoid loops in favour of the built-in vectorized functions.
2. Use the MATLAB profiler.
When your code in MATLAB runs slowly, it can almost always be improved, because we as scientists code for functionality and not speed. MATLAB's profiler tool (enabled using the command profile on) monitors your code while it runs (so overall, the execution becomes slightly slower), but then produces a report (viewable using the command profile viewer) which shows exactly which parts of the code take the greatest proportion of the execution time. In most first-iteration unoptimized code, most of the execution time is taken
by a single operation that is deeply nested and called many times over (see the above note on for loops). For example, my code for Fermi fitting has a built-in polylog evaluation function in order to avoid calling the "external" function I have written. What this achieves is that the code only needs to load the interpolation tables once per image and not once per fitting round (of which there can be a couple of hundred). This massively reduces the fitting time (by a factor of about 3).
3. Do not attempt a fit with more than 3 or 4 parameters.
In general, as the number of parameters a MATLAB function function (i.e. numerical optimization routine) has to play with grows, the probability of the fit succeeding decreases, and the time the fit takes grows exponentially. Fitting is generally best separated into stages: first fitting a simpler model with fewer adjustable parameters, and then using those results as initial guesses for the fuller model.
4. Use functions.
The introduction of function-based programming in the 1960s and 1970s was an enormous step forward from the line-number/GOTO approach to moving around code that was used previously. Use this
power! One useful function to have is a function which performs nice plotting (i.e. sets the
axis font size, grid lines, and any other options that you like) of a simple data set, so if you
want to print the graph, you don’t have to adjust the settings manually. More importantly,
when writing physics code, it is very useful to separate the code into functional chunks that
make sense; in my Fermi fitting code below, I separate the code into three layers: evaluation
of the polylog function using Mathematica/MATLAB by interpolation; fitting of an arbitrary
(physically meaningless) image using a Fermi cloud shape; and finally, assigning physical
meaning to the fit parameters. This way, it is much easier to keep track of what the code is
doing at any given time.
5. Use the MATLAB diary.
Sometimes, you come up with a brilliant idea, try it out in the command line interface of
MATLAB, and then go on and work on other stuff for a while. Later, you think and try to
recall what exactly it was that you did. A useful feature of MATLAB for making sure all
your commands (and their results) are documented is the diary feature of MATLAB. The
syntax is simple: diary(’filename’) will save all your commands and the output MATLAB
produces into the specified file.
A.2
Fermi cloud fitting
As an instructive example, I include below my code for fitting two-dimensional images of particles
obeying Fermi-Dirac statistics in a harmonic trap. The physical and mathematical groundwork for
this code is discussed in Appendix D. We assume (D.12) as the starting point and proceed from
there.
A.2.1
Li2_2d_physical.m
The first function is the high-level "physical" fitter. One inputs the image and background, as well as experimental parameters in the struct expv (which is then unpacked in a self-explanatory way). The mathematical fit itself is performed by the function Li2_2d_fit, discussed in the following subsection.
%img is an absorption image; bg is a background image; expv are experimental
%parameters in a struct, with fields: lambda, tof, pxedge, waxis, wx, wy, m

function [paramsout,R,fitfun] = Li2_2d_physical(img,bg,expv)

    function u = f(p)
        u = ((1+p)/p)*log(1+p);
        if p == Inf
            u = q;
        end
        if p < 1e-10
            u = 1;
        end
    end

k_b = 1.3806504e-23;
hbar = 6.62606896e-34/(2*pi);

lambda = expv.lambda;
tof = expv.tof;
pxedge = expv.pxedge;
pxarea = pxedge^2;
waxis = expv.waxis;
wx = expv.wx;
wy = expv.wy;
m = expv.m;

wbar = (waxis*wx*wy)^(1/3);

od = -log(img./bg);

%Assuming resonant imaging. For off-resonant imaging, will need to add an
%appropriate function.
sigma0 = 3*lambda^2/(2*pi)*0.7;

odperatom = pxarea/sigma0;

%The "integrated" atom count
Ninteg = odperatom*sum(sum(od));

[Y,X] = meshgrid(1:size(od,2),1:size(od,1));

[paramsout,R,fitfun] = Li2_2d_fit(X,Y,od);

q = paramsout.q;

lifactor = Lis(q,3)*Lis(q,0)/(Lis(q,2)*Lis(q,1));

Nfit = odperatom*pi*paramsout.amp*paramsout.sigmax*paramsout.sigmay*lifactor;

paramsout.ninteg = Ninteg;
paramsout.nfit = Nfit;
paramsout.Tf_n = (hbar/k_b) * wbar * (6*Nfit)^(1/3);

bx = sqrt(1+wx^2*tof^2);
by = sqrt(1+wy^2*tof^2);

Rx = paramsout.sigmax * pxedge;
Ry = paramsout.sigmay * pxedge;

paramsout.T_x = (1/k_b)*(1/2) * m*wx^2 * (Rx^2 / bx^2) * 1/f(exp(q));
paramsout.T_y = (1/k_b)*(1/2) * m*wy^2 * (Ry^2 / by^2) * 1/f(exp(q));

paramsout.ToTf = qtoDegen(q);

paramsout.Tf_x = paramsout.T_x/paramsout.ToTf;
paramsout.Tf_y = paramsout.T_y/paramsout.ToTf;

end
The first thing to note about Li2_2d_physical.m is the complete decoupling of the mathematical model from the physical interpretation of the fit results: the function Li2_2d_fit performs a purely mathematical fit of the image, and only afterwards are its output parameters converted into physical quantities such as the atom number and temperature.
A.2.2
Li2_2d_fit.m
The only remaining task is to actually fit the optical density image, which is conveniently performed
in Li2_2d_fit.m.
%params: [ampl offset q=mu*beta x0 R_x y0 R_y]
%x,y are the coordinates of the points
function [paramsout,R,fitfun] = Li2_2d_fit(x,y,z)

q=0;
tic;
LIDATAORIG = Liload(2);
LIDATAMIN = min(LIDATAORIG(:,1));
LIDATAMAX = max(LIDATAORIG(:,1));

LIDATA = LIDATAORIG;

    function u = f(p)
        u = ((1+p)/p)*log(1+p);
        if p == Inf
            u = q;
        end
        if p < 1e-10
            u = 1;
        end
    end

    function [L2] = LisINT(x)
        L2 = interp1(LIDATA(:,1),LIDATA(:,2),x,'spline');
        u = find(x<LIDATAMIN);
        L2(u) = exp(x(u));
        u = find(x>LIDATAMAX);
        L2(u) = x(u).^2 / gamma(3);
    end

    function [L] = dist(params)
        a = params(1);
        offs = params(2);
        q = params(3);
        %q = exp(params(3));
        %q = params(3)^11
        x0 = params(4);
        Rx = params(5);
        y0 = params(6);
        Ry = params(7);

        v = q - ((x-x0).^2/(Rx^2)+(y-y0).^2/(Ry^2))*f(exp(q));
        L = a*LisINT(v)/LisINT(q)+offs;
        L = L - z;
    end

    function [L] = distVARQ(params)
        a = paramsl(1);
        offs = paramsl(2);
        q = params;
        x0 = paramsl(4);
        Rx = paramsl(5);
        y0 = paramsl(6);
        Ry = paramsl(7);

        v = q - ((x-x0).^2/(Rx^2)+(y-y0).^2/(Ry^2))*f(exp(q));
        L = a*LisINT(v)/LisINT(q)+offs;
        L = L - z;
    end

[x1,z1] = BinMean(x,z,size(z,2));
px = Li5_2_1d_fit(x1,z1);

[y1,z1] = BinMean(y,z,size(z,1));
py = Li5_2_1d_fit(y1,z1);

%params: [ampl offset q x0 R_x y0 R_y]
a2 = max(max(z));
o2 = (px.offs+py.offs)/2;
%q2 = log(sqrt(px.q*py.q));
q2 = (px.q+py.q)/2;
x02 = px.mean;
Rx2 = px.sigma;
y02 = py.mean;
Ry2 = py.sigma;

paramsl = [a2 o2 q2 x02 Rx2 y02 Ry2];

opts = optimset('Display','none','TolX',1e-5,'TolFun',1e-7,...
    'MaxFunEvals',2e2,'Maxiter',2e1,'LargeScale','on','PrecondBandWidth',10);

[paramsl,R] = lsqnonlin(@dist,paramsl,[],[],opts);

%paramsl(3) = exp(paramsl(3));
%paramsl(3) = paramsl(3)^11;

paramsout.amp = paramsl(1);
paramsout.offs = paramsl(2);
paramsout.q = paramsl(3);
paramsout.meanx = paramsl(4);
paramsout.sigmax = paramsl(5);
paramsout.meany = paramsl(6);
paramsout.sigmay = paramsl(7);

fitfun = Li2_2d(paramsl,x,y);
paramsout.fittime = toc;

end
This function has several features to heavily optimize its performance. It has a built-in polylog interpolator (in order to avoid loading the interpolation tables more than once). It first uses two one-dimensional Gaussian fits to approximately localize the cloud. Then, two one-dimensional polylog fits are used to estimate the value of q = log(z). Finally, the real problem is attacked, with the full two-dimensional fit being performed. Each successive stage improves on the results of the previous one, and simplifies the job of the next.
A.2.3
Mathematica polylog pre-evaluation code
We’ve almost reached the end of digging into the code; in order to speed up the computation of the
polylogarithm functions by MATLAB, we use Mathematica to pre-compute a large table of values
of the special polylog functions I defined as fn (x) = −Lin (−ex ):
set = Join[Table[y, {y, -10, 20, 1/10}], Table[y, {y, 21, 50, 1}],
Table[Exp[y], {y, 4, 7, 1/10}]];
For[i = 0, i < 5, i = i + 1/2,
Export[Directory[] <> "\\" <> ToString[N[i, 2]] <> "f.tsv",
Block[{$MaxExtraPrecision = 1500},
Table[{N[x, 30], N[Re[-PolyLog[i, -Exp[x]]], 30]}, {x, set}]]];
]
The reason for using the special polylog function is that this is the form in which one encounters
the polylogs in the fitting equations such as (D.13). A MATLAB script loads the files produced by
this Mathematica script (with file names like “3.0f.tsv”) and interpolates the loaded values to find
estimates of f_n at intermediate values of x. We can achieve very good numerical accuracy (less than 10^{-8} error) without the extreme computational burden of evaluating the polylogarithm functions explicitly. The Mathematica script which generated the data files takes about 50-100 times longer than the MATLAB script to evaluate each value of f_n (including the time MATLAB takes to load the data point). Moreover, since MATLAB is very fast at evaluating vectorized functions, finding these special polylog functions for a large number of points carries a tiny overhead (i.e. the time taken to interpolate the polylog values is essentially constant for fewer than 100000 points).
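The precompute-then-interpolate idea can be sketched in a few lines of Python (illustrative only; the actual pipeline uses Mathematica-generated tables and MATLAB spline interpolation, and its tables also cover x > 0, which this sketch does not):

```python
import math
from bisect import bisect_left

# Precompute f_2(x) = -Li_2(-e^x) once on a grid, then answer later queries
# by linear interpolation instead of re-summing the series each time.
def f2_exact(x):
    """-Li_2(-e^x) by its defining series; valid for x < 0 (|e^x| < 1)."""
    z = math.exp(x)
    return sum((-1) ** (k + 1) * z ** k / k ** 2 for k in range(1, 200))

grid = [-10.0 + 0.1 * i for i in range(95)]   # grid covering [-10, -0.6]
table = [f2_exact(x) for x in grid]           # built once, reused many times

def f2_interp(x):
    i = bisect_left(grid, x)
    x0, x1 = grid[i - 1], grid[i]
    t = (x - x0) / (x1 - x0)
    return (1 - t) * table[i - 1] + t * table[i]

err = abs(f2_interp(-2.05) - f2_exact(-2.05))  # interpolation error is tiny
```

The table build is the expensive step; each subsequent lookup costs only a binary search and two multiplications, which is the same trade the thesis code makes.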
Appendix B
Polylogarithms
Integrating Bose-Einstein and Fermi-Dirac functions often leads one to the polylogarithm functions
Li_n(z), defined for arbitrary complex n and z:

\mathrm{Li}_n(z) = \sum_{k=1}^{\infty} \frac{z^k}{k^n} \qquad (B.1)
This series converges only for |z| < 1, but can be continued to arbitrary complex z using the technique of analytic continuation. These continuations produce the integrals of both Bose-Einstein and Fermi-Dirac distributions:

\mathrm{Li}_n(z) = \frac{1}{\Gamma(n)} \int_0^{\infty} \frac{t^{n-1}}{e^t/z - 1}\, dt \qquad (B.2)

where \Gamma(n) = \int_0^{\infty} t^{n-1} e^{-t}\, dt is the Gamma function, which equals (n - 1)! for positive integer n.
If z > 0, this expression is equivalent to a Bose-Einstein type integral, while if z < 0, this expression gives the integral of a Fermi-Dirac distribution. Other useful properties of the polylog functions include a recursion relation:

z\, \frac{\partial \mathrm{Li}_n(z)}{\partial z} = \mathrm{Li}_{n-1}(z) \qquad (B.3)
as well as a special case for n = 1 (which justifies the polylogarithm name):

\mathrm{Li}_1(z) = \sum_{k=1}^{\infty} \frac{z^k}{k} = -\ln(1 - z) \qquad (B.4)
If n ≤ 0, the above two properties combine to give all the polylog functions with integer n (note that the integral definition (B.2) is not valid for \mathrm{Re}(n) < 0):

\mathrm{Li}_{-n}(z) = \left( z \frac{\partial}{\partial z} \right)^{\!n} \frac{z}{1-z} \qquad (B.5)
Finally, [14, Eq. 22] gives a very useful polylogarithm integral:

\int_{-\infty}^{\infty} \mathrm{Li}_n\!\left(z e^{-x^2}\right) dx = \sqrt{\pi}\, \mathrm{Li}_{n+1/2}(z) \qquad (B.6)
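Identity (B.6) is easy to check numerically. A Python sketch for n = 1, where Li_1(w) = -ln(1 - w) by (B.4), and the right-hand side is evaluated by the defining series (valid since |z| < 1):

```python
import math

# Check: integral of Li_1(z*e^{-x^2}) dx over the real line
#        equals sqrt(pi) * Li_{3/2}(z), for z = -0.5.
z = -0.5

def li1(w):
    return -math.log(1.0 - w)

N = 4000
h = 16.0 / N  # integrate over [-8, 8]; the integrand decays like a Gaussian
integral = h * sum(li1(z * math.exp(-x * x))
                   for x in (-8.0 + i * h for i in range(N + 1)))

li_3_2 = sum(z ** k / k ** 1.5 for k in range(1, 400))  # series for Li_{3/2}
# integral and sqrt(pi) * li_3_2 agree to high accuracy
```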
Figure B.1: Plots of -\mathrm{Li}_n(-x) for all the orders n that will be relevant for Fermi fitting. Note the asymptotic behaviour of all the functions: -\mathrm{Li}_n(-x) \approx x for x \ll 1, while -\mathrm{Li}_n(-x) \approx \frac{1}{\Gamma(n+1)} \ln^n x for x \gg 1.
As an example application of these properties, we integrate Eq. (D.3):

\begin{align*}
n(\mathbf{r}) &= \frac{1}{h^3} \int \frac{d\mathbf{p}}{e^{(p^2/2m + V(\mathbf{r}) - \mu)/k_B T} + 1} \\
&\overset{\beta = 1/k_B T}{=} \frac{4\pi}{h^3} \int_0^{\infty} \frac{p^2\, dp}{e^{\beta(p^2/2m + V(\mathbf{r}) - \mu)} + 1} \\
&\overset{t = \beta p^2/2m}{=} \frac{4\pi\sqrt{2}\, m^{3/2}}{h^3 \beta^{3/2}} \int_0^{\infty} \frac{t^{1/2}\, dt}{e^{t + \beta(V(\mathbf{r}) - \mu)} + 1} \\
&= -\frac{4\pi\sqrt{2}\, m^{3/2}}{h^3 \beta^{3/2}}\, \Gamma\!\left(\tfrac{3}{2}\right) \mathrm{Li}_{3/2}\!\left(-e^{-\beta(V(\mathbf{r}) - \mu)}\right) \\
&\overset{\Gamma(3/2) = \sqrt{\pi}/2}{=} -\left( \frac{k_B T\, m}{2\pi\hbar^2} \right)^{3/2} \mathrm{Li}_{3/2}\!\left(-e^{-\beta(V(\mathbf{r}) - \mu)}\right)
\end{align*}
\qquad (B.7)

By recalling that the de Broglie wavelength of a thermal particle equals \lambda_{dB} = \sqrt{2\pi\hbar^2 / (m k_B T)}, we have:

n(\mathbf{r}) = -\frac{1}{\lambda_{dB}^3}\, \mathrm{Li}_{3/2}\!\left(-e^{-\beta(V(\mathbf{r}) - \mu)}\right) \qquad (B.8)

which is the same as [14, Eq. 20].
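For a feel of the scales involved, one can plug numbers into the expression for \lambda_{dB}; a Python sketch for 40K at a hypothetical temperature of 1 µK (the constants are standard values, the temperature is just an illustrative choice):

```python
import math

# lambda_dB = sqrt(2*pi*hbar^2 / (m*k_B*T)) for 40K at T = 1 microkelvin.
hbar = 1.054571e-34          # J s
k_B = 1.380649e-23           # J/K
m = 39.964 * 1.660539e-27    # mass of a 40K atom, kg
T = 1e-6                     # K (hypothetical)

lam_dB = math.sqrt(2.0 * math.pi * hbar ** 2 / (m * k_B * T))
# lam_dB comes out to a few hundred nanometres, i.e. comparable to the
# wavelength of the imaging light
```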
Another useful and instructive integral is the time-of-flight result for a Fermi gas in a harmonic potential given in (D.9). The integral to derive that result is

n(\mathbf{r}, t) = \frac{1}{h^3} \int \frac{d\mathbf{p}}{e^{(H(\mathbf{r} - \mathbf{p}t/m,\, \mathbf{p}) - \mu)/k_B T} + 1} \qquad (B.9)

with Hamiltonian H = \frac{p^2}{2m} + \frac{1}{2} m \sum_i \omega_i^2 r_i^2. Plugging in gives:

n(\mathbf{r}, t) = \frac{1}{h^3} \int \frac{d\mathbf{p}}{e^{\beta\left( \frac{1}{2m} \sum_i \left[ p_i^2 + \omega_i^2 (m r_i - p_i t)^2 \right] - \mu \right)} + 1} \qquad (B.10)

Introduce a new variable q defined by q_i = p_i \sqrt{1 + \omega_i^2 t^2} - \frac{m \omega_i^2 r_i t}{\sqrt{1 + \omega_i^2 t^2}}. Noting that dq_i = dp_i \sqrt{1 + \omega_i^2 t^2} and

p_i^2 + \omega_i^2 (m r_i - p_i t)^2 = (1 + \omega_i^2 t^2) p_i^2 - 2 m \omega_i^2 r_i p_i t + m^2 \omega_i^2 r_i^2 = q_i^2 + \frac{m^2 \omega_i^2 r_i^2}{1 + \omega_i^2 t^2},

we have

n(\mathbf{r}, t) = \frac{1}{h^3} \prod_i \frac{1}{\sqrt{1 + \omega_i^2 t^2}} \int \frac{d\mathbf{q}}{e^{\beta\left( \frac{1}{2m} \sum_i q_i^2 + \frac{m}{2} \sum_i \frac{\omega_i^2 r_i^2}{1 + \omega_i^2 t^2} - \mu \right)} + 1} \qquad (B.11)

By the integral in (B.7), we immediately obtain

n(\mathbf{r}, t) = -\frac{1}{\lambda_{dB}^3} \prod_i \frac{1}{\sqrt{1 + \omega_i^2 t^2}}\, \mathrm{Li}_{3/2}\!\left( -e^{\beta\left( \mu - \frac{m}{2} \sum_i \frac{\omega_i^2 r_i^2}{1 + \omega_i^2 t^2} \right)} \right) \qquad (B.12)

which is exactly (D.9), as claimed.
Now we can integrate this expression along z using (B.6) to produce the column density function. Substituting u = \frac{\omega_z z \sqrt{m\beta/2}}{\sqrt{1 + \omega_z^2 t^2}}:

\begin{align*}
n_z(x, y, t) &= -\frac{1}{\lambda_{dB}^3} \prod_i \frac{1}{\sqrt{1 + \omega_i^2 t^2}} \int_{-\infty}^{\infty} \mathrm{Li}_{3/2}\!\left( -e^{\beta\left( \mu - \frac{m}{2} \sum_i \frac{\omega_i^2 r_i^2}{1 + \omega_i^2 t^2} \right)} \right) dz \\
&= -\frac{1}{\lambda_{dB}^3} \frac{\sqrt{2/m\beta}}{\omega_z} \frac{1}{\sqrt{1 + \omega_x^2 t^2}} \frac{1}{\sqrt{1 + \omega_y^2 t^2}} \int_{-\infty}^{\infty} \mathrm{Li}_{3/2}\!\left( -e^{\beta\left( \mu - \frac{1}{2} \frac{m\omega_x^2 x^2}{1 + \omega_x^2 t^2} - \frac{1}{2} \frac{m\omega_y^2 y^2}{1 + \omega_y^2 t^2} \right)} e^{-u^2} \right) du \\
&= -\frac{1}{\lambda_{dB}^3} \frac{\sqrt{2\pi/m\beta}}{\omega_z} \frac{1}{\sqrt{1 + \omega_x^2 t^2}} \frac{1}{\sqrt{1 + \omega_y^2 t^2}}\, \mathrm{Li}_2\!\left( -e^{\beta\left( \mu - \frac{1}{2} \frac{m\omega_x^2 x^2}{1 + \omega_x^2 t^2} - \frac{1}{2} \frac{m\omega_y^2 y^2}{1 + \omega_y^2 t^2} \right)} \right)
\end{align*}
\qquad (B.13)
Appendix C
Fitting functions for classical gases
In this appendix, we derive appropriate time-of-flight fitting functions for atoms trapped in different potentials. Of particular interest are potentials of the form u(r) ∝ |r|, e.g. the potential found in a quadrupole trap. As a simplifying assumption, we take the atoms to be hot enough to be treated classically, i.e. we ignore considerations of Fermi-Dirac and Bose-Einstein statistics.

In this case, the phase space density f of the atoms is

f(\mathbf{r}, \mathbf{p}) = \frac{1}{h^3} e^{-(H(\mathbf{r}, \mathbf{p}) - \mu)/k_B T} = \frac{e^{\beta\mu}}{h^3} e^{-\beta H(\mathbf{r}, \mathbf{p})} \qquad (C.1)
where as usual H is the Hamiltonian describing the atoms. After a time of flight t, neglecting gravity (which accelerates all atoms equally) and making the ballistic approximation, we have

f(\mathbf{r}, \mathbf{p}, t) = \frac{e^{\beta\mu}}{h^3} e^{-\beta H(\mathbf{r} - \frac{t}{m}\mathbf{p},\, \mathbf{p})} \qquad (C.2)

At every position, we integrate over the momentum phase space in order to obtain the spatial density:

n(\mathbf{r}, t) = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta H(\mathbf{r} - \frac{t}{m}\mathbf{p},\, \mathbf{p})}\, d\mathbf{p} \qquad (C.3)
This is the most general expression for an arbitrary Hamiltonian H. If the Hamiltonian is of the usual form for a particle, H = \frac{p^2}{2m} + V(\mathbf{r}), then (C.3) becomes

n(\mathbf{r}, t) = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta\left( \frac{p^2}{2m} + V(\mathbf{r} - \frac{t}{m}\mathbf{p}) \right)}\, d\mathbf{p} \qquad (C.4)
When t = 0, we obtain the density of trapped atoms in equilibrium:
eβ(µ−V (r))
n(r, 0) =
h3
Z
e
−β
p2
2m
Z
β 2
eβ(µ−V (r)) Y
2mπ 3/2 β(µ−V (r))
− 2m
pi
dp =
e
e
dpi =
h3
βh2
(C.5)
i
Note that this calculation was only this simple because of the quadratic term in p found in the
Hamiltonian. Correspondingly, the integral in (C.4) will only be simple for quadratic (i.e. harmonic)
potentials.
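As a side note (not from the thesis), the prefactor in (C.5) is exactly 1/\lambda_{dB}^3, the inverse cube of the thermal de Broglie wavelength used later in Appendix D. A quick numerical check of this identity; the 40K mass and 1 µK temperature below are arbitrary illustrative values:

```python
import math

# CODATA constants; the 40K mass and 1 uK temperature are arbitrary illustrative values.
kB = 1.380649e-23               # Boltzmann constant [J/K]
h = 6.62607015e-34              # Planck constant [J s]
m = 39.964 * 1.66053906660e-27  # mass of a 40K atom [kg]
T = 1e-6                        # temperature [K]
beta = 1.0 / (kB * T)

# Prefactor of the equilibrium density (C.5).
prefactor = (2.0 * math.pi * m / (beta * h**2)) ** 1.5

# Thermal de Broglie wavelength: lambda_dB = h / sqrt(2 pi m kB T).
lam = h / math.sqrt(2.0 * math.pi * m * kB * T)

# The prefactor is exactly 1 / lambda_dB^3 (up to floating-point rounding).
assert abs(prefactor * lam**3 - 1.0) < 1e-12
```

The identity makes the classical result easy to compare with the quantum expressions of Appendix D, where every density carries a 1/\lambda_{dB}^3 out front.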
C.1

Harmonic potential

First, let's try to find the usual "expanding Gaussian cloud" result for a harmonic potential. In this case,

V(r) = \frac{1}{2} m \sum_i \omega_i^2 r_i^2,  (C.6)
where \omega_i are the different frequencies of the trap. Substituting this expression into (C.4) gives:

n(r, t) = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta\left(\frac{p^2}{2m} + \frac{1}{2}m\sum_i \omega_i^2 (r_i - \frac{t}{m}p_i)^2\right)} dp = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta\sum_i\left(\frac{1}{2m}p_i^2 + \frac{1}{2m}(m\omega_i)^2 (r_i - \frac{t}{m}p_i)^2\right)} dp

= \frac{e^{\beta\mu}}{h^3} \int e^{-\frac{\beta}{2m}\sum_i\left(p_i^2 + (m\omega_i r_i - t\omega_i p_i)^2\right)} dp  (C.7)
Noting that the integrals are independent in each dimension of p, and recalling that for a Gaussian integral,

\int e^{-ax^2 + bx + c}\, dx = \sqrt{\frac{\pi}{a}}\, e^{b^2/4a + c},  (C.8)
we find
2
#
"r
(tωi mri )2
β
2 ω2 r2
− 2m
+m
eβµ Y
π
2
i i
1+t2 ω
i
n(r, t) = 3
e
2ω2
h
1
+
t
i
i
2 2
#
"r
m ωi
β
− 2m
ri2
eβµ Y
π
2
2
1+t ω
i
= 3
e
2ω2
h
1
+
t
i
i
q
This is nothing but the original cloud shape scaled by the factors ri → 1 + t2 ωi2 ri .
63
(C.9)
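As a sketch (not part of the thesis), the scaling result (C.9) can be checked numerically in one dimension by comparing the momentum integral of (C.7) against the closed form; the units (m = \beta = 1) and the values of \omega, t, x below are arbitrary choices:

```python
import numpy as np

# Arbitrary parameters in hypothetical units with m = beta = 1.
omega, t, x = 2.0, 0.7, 0.9

# Momentum integral of (C.7) in one dimension (constants e^{beta mu}/h dropped),
# evaluated by a simple Riemann sum on a fine grid.
p = np.linspace(-40.0, 40.0, 200001)
integrand = np.exp(-(0.5 * p**2 + 0.5 * omega**2 * (x - t * p) ** 2))
num = integrand.sum() * (p[1] - p[0])

# Closed form from (C.9): sqrt(2 pi / (1 + t^2 w^2)) exp(-w^2 x^2 / (2 (1 + t^2 w^2)))
b2 = 1.0 + t**2 * omega**2
closed = np.sqrt(2.0 * np.pi / b2) * np.exp(-0.5 * omega**2 * x**2 / b2)

assert abs(num / closed - 1.0) < 1e-6
```

The agreement confirms that free expansion of a thermal cloud from a harmonic trap is a pure rescaling of the in-trap Gaussian.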
C.2

Quadrupole potential

Next, let's calculate n(r, t) for a quadrupole trap-type potential,

V(r) = \sqrt{\sum_i a_i^2 r_i^2}.  (C.10)

This expression arises because near a zero of the magnetic field, we can write B_i = \sum_j \frac{\partial B_i}{\partial r_j} r_j = \sum_j dB_{ij} r_j. From Gauss's law, we must have \mathrm{Tr}(dB) = 0, while Ampère's law (note there is neither current density nor a changing electric field present) gives dB_{ij} - dB_{ji} = 0, i.e. dB is symmetric. Consequently, we can always select our axes x_i in such a way as to diagonalize dB, giving B_i = dB_{ii} r_i \equiv a_i r_i. The values a_i are "potential strengths" corresponding to the gradients of the magnetic field near the zero-field region of the quadrupole trap. Finally, the potential energy of a magnetic dipole in such a trap is proportional to |B| = \sqrt{\sum_i B_i^2} = \sqrt{\sum_i a_i^2 r_i^2}, giving (C.10).
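The diagonalization argument above can be verified numerically: for a random symmetric, traceless gradient matrix dB, the field magnitude computed directly agrees with the principal-axis form. A small sketch (the seed and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed

# Random symmetric, traceless gradient matrix dB (the Gauss + Ampere constraints).
M = rng.normal(size=(3, 3))
dB = 0.5 * (M + M.T)
dB -= np.eye(3) * np.trace(dB) / 3.0

# Diagonalize: columns of V are the principal axes; a holds the gradients a_i.
a, V = np.linalg.eigh(dB)

r = rng.normal(size=3)   # arbitrary position near the field zero
B = dB @ r               # linearized field, B_i = sum_j dB_ij r_j
r_prime = V.T @ r        # the same position in the principal-axis frame

# |B| = sqrt(sum_i a_i^2 r_i'^2), i.e. the quadrupole potential shape (C.10).
assert np.isclose(np.linalg.norm(B), np.sqrt(np.sum(a**2 * r_prime**2)))
```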
As usual, we plug (C.10) into (C.4), giving

n(r, t) = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta\left(\frac{p^2}{2m} + \sqrt{\sum_i a_i^2 (r_i - t p_i/m)^2}\right)} dp  (C.11)
Now we rescale our axes to make the problem more symmetric, defining s_i = a_i(r_i - t p_i/m) and q_i = a_i r_i. Note this means ds_i = -a_i t\, dp_i/m and p_i = -(s_i/a_i - r_i)m/t. Then, we have

n(r, t) = \frac{m^3 e^{\beta\mu}}{t^3 h^3 \prod_i a_i} \int e^{-\beta\left(\frac{m}{2t^2}\sum_i ((s_i - q_i)/a_i)^2 + |s|\right)} ds  (C.12)
I spent a long time trying to integrate this expression both in terms of elementary and non-elementary functions, to no avail. As a last resort, I created my own function. We define a new special function, called the Braverman Q function, as:

Q(q, a) = \int e^{-\left(\sum_i ((s_i - q_i)/a_i)^2 + |s|\right)} ds  (C.13)
While this definition essentially sweeps all the difficult parts of the calculation under the table, the Q function is defined in terms of dimensionless variables, and hence it is easier to discuss its limiting behaviour. In addition, we can pull the same trick as with other special functions: pre-calculation followed by interpolation as a method of rapid evaluation. Physically, the Q function is simple to interpret: it is the complete spatial integral of an offset elliptical 3-dimensional Gaussian suppressed by an e^{-|r|} factor. One limiting behaviour occurs when a_i \ll q_i for all i, in which case the Gaussian is strongly localized in a region where the exponential term does not vary very much. In this case

Q(q, a) = \int e^{-\left(\sum_i ((s_i - q_i)/a_i)^2 + |s|\right)} ds \approx e^{-|q|} \int e^{-\sum_i ((s_i - q_i)/a_i)^2} ds = \pi^{3/2} \prod_i a_i\, e^{-|q|}  (C.14)
Conversely, if a_i \gg 1 and a_i \gg q_i, the Gaussian is very broad and nearly centered at the origin. In this case,

Q(q, a) = \int e^{-\left(\sum_i ((s_i - q_i)/a_i)^2 + |s|\right)} ds \approx \int e^{-|s|}\, ds = 8\pi  (C.15)
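Both limiting behaviours can be checked by brute-force quadrature. The sketch below (grid sizes and parameter values are arbitrary choices) evaluates (C.13) on a grid in the narrow-Gaussian regime, and checks the 8\pi limit via the radial form of \int e^{-|s|} ds:

```python
import numpy as np

def Q_grid(q, a, n=81, half_width=6.0):
    """Brute-force grid quadrature of (C.13); adequate when the Gaussian is
    narrow enough that the integrand only matters near s = q."""
    axes = [np.linspace(qi - half_width * ai, qi + half_width * ai, n)
            for qi, ai in zip(q, a)]
    S = np.meshgrid(*axes, indexing="ij")
    gauss = np.exp(-sum(((S[i] - q[i]) / a[i]) ** 2 for i in range(3)))
    integrand = gauss * np.exp(-np.sqrt(sum(Si**2 for Si in S)))
    dV = np.prod([ax[1] - ax[0] for ax in axes])
    return integrand.sum() * dV

q = np.array([3.0, 0.5, -1.0])     # arbitrary offsets
a = np.array([0.02, 0.03, 0.025])  # narrow widths, a_i << |q|

# Narrow-Gaussian limit (C.14): Q -> pi^{3/2} (prod_i a_i) e^{-|q|}
limit = np.pi**1.5 * np.prod(a) * np.exp(-np.linalg.norm(q))
assert abs(Q_grid(q, a) / limit - 1.0) < 0.01

# Broad-Gaussian limit (C.15): int e^{-|s|} ds = 4 pi int r^2 e^{-r} dr = 8 pi.
r = np.linspace(0.0, 60.0, 400001)
radial = (r**2 * np.exp(-r)).sum() * (r[1] - r[0])
assert abs(4.0 * np.pi * radial - 8.0 * np.pi) < 1e-4
```

The same brute-force evaluator could serve as the "pre-calculation" stage mentioned above, tabulating Q on a grid of (q, a) for later interpolation.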
In terms of the Q function, the distribution becomes

n(r, t) = \frac{m^3 e^{\beta\mu}}{t^3 h^3 \prod_i a_i} \int e^{-\beta\left(\frac{m}{2t^2}\sum_i ((s_i - q_i)/a_i)^2 + |s|\right)} ds \overset{u = \beta s}{=} \frac{m^3 e^{\beta\mu}}{t^3 h^3 \beta^3 \prod_i a_i} \int e^{-\left(\sum_i \left(\frac{u_i - \beta q_i}{a_i t \sqrt{2\beta/m}}\right)^2 + |u|\right)} du

= \frac{m^3 e^{\beta\mu}}{t^3 h^3 \beta^3 \prod_i a_i}\, Q\!\left(\beta q,\; a t \sqrt{2\beta/m}\right)  (C.16)

where \beta q and a t \sqrt{2\beta/m} are understood componentwise.

C.3
Linear potential

Now let's try to find n(r, t) for a linear potential, i.e. V(r) = \sum_i a_i |x_i|. This was my mistaken attempt to solve the time-of-flight problem for a quadrupole trap: this form of the potential is incorrect (it does not equal (C.10) in any approximation). Nonetheless, this may be a useful potential some day. Since the potential is first-order, we can pick the axes x_i arbitrarily.
The above integral (C.4) now becomes

n(r, t) = \frac{e^{\beta\mu}}{h^3} \int e^{-\beta\left(\frac{p^2}{2m} + \sum_i a_i |r_i - \frac{t}{m}p_i|\right)} dp = \frac{e^{\beta\mu}}{h^3} \prod_i \left[\int_{-\infty}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} + a_i |r_i - \frac{t}{m}p_i|\right)} dp_i\right]  (C.17)
We have

\int_{-\infty}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} + a_i |r_i - \frac{t}{m}p_i|\right)} dp_i = \int_{-\infty}^{m r_i/t} e^{-\beta\left(\frac{p_i^2}{2m} + a_i (r_i - \frac{t}{m}p_i)\right)} dp_i + \int_{m r_i/t}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} - a_i (r_i - \frac{t}{m}p_i)\right)} dp_i  (C.18)
We define the error function following the definition given in Abramowitz and Stegun [16]:

\mathrm{erf}(x) \equiv \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt  (C.19)
Note \mathrm{erf}(\pm\infty) = \pm 1. Next, we play around with Gaussian integrals:

\int_{-\infty}^{m r_i/t} e^{-\beta\left(\frac{p_i^2}{2m} + a_i (r_i - \frac{t}{m}p_i)\right)} dp_i = e^{-\beta a_i r_i} \int_{-\infty}^{m r_i/t} e^{-\frac{\beta}{2m}\left(p_i^2 - 2 a_i t p_i\right)} dp_i

\overset{u_i = p_i - a_i t}{=} e^{-\beta a_i r_i} \int_{-\infty}^{m r_i/t - a_i t} e^{-\frac{\beta}{2m}\left(u_i^2 - (a_i t)^2\right)} du_i = e^{-\beta a_i \left(r_i - \frac{a_i t^2}{2m}\right)} \int_{-\infty}^{m r_i/t - a_i t} e^{-\frac{\beta}{2m} u_i^2} du_i

\overset{v_i = u_i \sqrt{\beta/2m}}{=} \sqrt{\frac{2m}{\beta}}\, e^{-\beta a_i \left(r_i - \frac{a_i t^2}{2m}\right)} \int_{-\infty}^{\sqrt{\beta/2m}\,(m r_i/t - a_i t)} e^{-v_i^2} dv_i = \sqrt{\frac{\pi m}{2\beta}}\, e^{-\beta a_i \left(r_i - \frac{a_i t^2}{2m}\right)} \left[\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} - a_i t\right)\right) + 1\right]  (C.20)
Now the other one:

\int_{m r_i/t}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} - a_i (r_i - \frac{t}{m}p_i)\right)} dp_i = e^{\beta a_i r_i} \int_{m r_i/t}^{\infty} e^{-\frac{\beta}{2m}\left(p_i^2 + 2 a_i t p_i\right)} dp_i

\overset{u_i = p_i + a_i t}{=} e^{\beta a_i r_i} \int_{m r_i/t + a_i t}^{\infty} e^{-\frac{\beta}{2m}\left(u_i^2 - (a_i t)^2\right)} du_i = e^{\beta a_i \left(r_i + \frac{a_i t^2}{2m}\right)} \int_{m r_i/t + a_i t}^{\infty} e^{-\frac{\beta}{2m} u_i^2} du_i

\overset{v_i = u_i \sqrt{\beta/2m}}{=} \sqrt{\frac{2m}{\beta}}\, e^{\beta a_i \left(r_i + \frac{a_i t^2}{2m}\right)} \int_{\sqrt{\beta/2m}\,(m r_i/t + a_i t)}^{\infty} e^{-v_i^2} dv_i = \sqrt{\frac{\pi m}{2\beta}}\, e^{\beta a_i \left(r_i + \frac{a_i t^2}{2m}\right)} \left[1 - \mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} + a_i t\right)\right)\right]  (C.21)

Finally,
\int_{-\infty}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} + a_i |r_i - \frac{t}{m}p_i|\right)} dp_i = \sqrt{\frac{\pi m}{2\beta}}\, e^{-\beta a_i \left(r_i - \frac{a_i t^2}{2m}\right)} \left[\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} - a_i t\right)\right) + 1\right] + \sqrt{\frac{\pi m}{2\beta}}\, e^{\beta a_i \left(r_i + \frac{a_i t^2}{2m}\right)} \left[1 - \mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} + a_i t\right)\right)\right]

= \sqrt{\frac{\pi m}{2\beta}}\, e^{\frac{\beta a_i^2 t^2}{2m}} \left[e^{-\beta a_i r_i}\left(\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} - a_i t\right)\right) + 1\right) + e^{\beta a_i r_i}\left(1 - \mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} + a_i t\right)\right)\right)\right]  (C.22)
Therefore,

n(r, t) = \frac{e^{\beta\mu}}{h^3} \prod_i \left[\int_{-\infty}^{\infty} e^{-\beta\left(\frac{p_i^2}{2m} + a_i |r_i - \frac{t}{m}p_i|\right)} dp_i\right]

= e^{\beta\mu + \frac{\beta a^2 t^2}{2m}} \left(\frac{\pi m}{2\beta h^2}\right)^{3/2} \prod_i \left[e^{-\beta a_i r_i}\left(\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} - a_i t\right)\right) + 1\right) + e^{\beta a_i r_i}\left(1 - \mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} + a_i t\right)\right)\right)\right]  (C.23)

where a^2 = \sum_i a_i^2.
As a sanity check, we plug in limiting values. Note that for x \gg 1, \mathrm{erf}(x) \approx 1 - e^{-x^2}/(x\sqrt{\pi}). Then, for t \ll 1,

\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} \pm a_i t\right)\right) \approx \mathrm{sgn}(r_i)\left(1 - \frac{e^{-\frac{\beta m r_i^2}{2t^2}}}{\sqrt{\frac{m\beta\pi}{2}}\, \frac{|r_i|}{t}}\right)  (C.24)
Omitting small terms, we have

e^{-\beta a_i r_i}\left(\mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} - a_i t\right)\right) + 1\right) + e^{\beta a_i r_i}\left(1 - \mathrm{erf}\!\left(\sqrt{\frac{\beta}{2m}}\left(\frac{m r_i}{t} + a_i t\right)\right)\right) \approx e^{-\beta a_i r_i}(1 + \mathrm{sgn}(r_i)) + e^{\beta a_i r_i}(1 - \mathrm{sgn}(r_i)) = 2 e^{-\beta a_i |r_i|}  (C.25)

Hence,

n(r, t \ll 1) \approx e^{\beta\mu} \left(\frac{\pi m}{2\beta h^2}\right)^{3/2} \prod_i 2 e^{-\beta a_i |r_i|} = e^{\beta(\mu - V(r))} \left(\frac{2\pi m}{\beta h^2}\right)^{3/2}  (C.26)

which is the same as (C.5) (astonishingly!)
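The per-axis factor of (C.23) can be validated against direct numerical integration of (C.18), and its short-time limit (C.25) recovered. A sketch in hypothetical units with m = \beta = 1 and arbitrary values of a_i, r_i, t:

```python
import math
import numpy as np

# Arbitrary values in hypothetical units with m = beta = 1.
a_i, r_i, t = 1.3, 0.7, 0.5

def axis_factor(a, r, t):
    """One Cartesian factor of (C.23) with m = beta = 1:
    e^{a^2 t^2/2} [e^{-a r}(erf((r/t - a t)/sqrt(2)) + 1)
                 + e^{+a r}(1 - erf((r/t + a t)/sqrt(2)))]."""
    s = 1.0 / math.sqrt(2.0)
    return math.exp(0.5 * a**2 * t**2) * (
        math.exp(-a * r) * (math.erf(s * (r / t - a * t)) + 1.0)
        + math.exp(a * r) * (1.0 - math.erf(s * (r / t + a * t)))
    )

# Direct numerical evaluation of the split integral (C.18) by a Riemann sum.
p = np.linspace(-60.0, 60.0, 400001)
num = np.exp(-(0.5 * p**2 + a_i * np.abs(r_i - t * p))).sum() * (p[1] - p[0])
assert abs(num / (math.sqrt(math.pi / 2.0) * axis_factor(a_i, r_i, t)) - 1.0) < 1e-6

# Short-time limit (C.25): the factor tends to 2 e^{-a |r|}.
assert abs(axis_factor(a_i, r_i, 1e-4) / (2.0 * math.exp(-a_i * abs(r_i))) - 1.0) < 1e-3
```

Python's standard-library `math.erf` follows the Abramowitz and Stegun convention (C.19), so the formula can be transcribed directly.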
Appendix D

Fitting functions for fermion gases

This discussion largely follows [14, 3.1.1]. In general, the occupation number for an eigenstate |i\rangle of a non-interacting Hamiltonian H for identical fermions, with energy E_i, is given by the Fermi-Dirac distribution:

\langle n_i \rangle = \frac{1}{e^{(E_i - \mu)/k_B T} + 1}  (D.1)
where \mu is the chemical potential, which is determined by the total number of particles n in the system through \sum_i \langle n_i \rangle = n. Allowing ourselves to speak of a point \{r, p\} in phase space as representing a possible state of the system (a semi-classical approach, validated by our use of the Thomas-Fermi approximation), we write the phase space occupation density corresponding to this point:

f(r, p) = \frac{1}{h^3}\, \frac{1}{e^{(H(r,p) - \mu)/k_B T} + 1}  (D.2)
where we note that the volume of a single quantum state in phase space is h^3. At any given position in real space r, we simply integrate over all possible momentum states p to obtain the spatial particle density:

n(r) = \frac{1}{h^3} \int \frac{dp}{e^{(H(r,p) - \mu)/k_B T} + 1}  (D.3)
We cannot proceed further without establishing the form of the Hamiltonian H. Since the atoms are trapped, the potential energy term can be approximated as a harmonic potential near the bottom of the trap:

H = \frac{p^2}{2m} + V(r) = \frac{p^2}{2m} + \frac{1}{2} m \left(\omega_x^2 x^2 + \omega_y^2 y^2 + \omega_z^2 z^2\right)  (D.4)
By performing the integral in (D.3), we obtain a polylogarithm function (see Appendix B for a brief review as well as the actual integration):

n(r) = -\frac{1}{\lambda_{dB}^3}\, \mathrm{Li}_{3/2}\!\left(-e^{\frac{\mu - V(r)}{k_B T}}\right)  (D.5)
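The polylogarithm identity (D.5) can be checked numerically by evaluating the radial momentum integral (D.3) for a fixed local value of \mu - V(r) and comparing with the closed form. A sketch in hypothetical units \hbar = m = k_B = 1 (so h = 2\pi and \lambda_{dB} = \sqrt{2\pi/T}), using a hand-rolled series polylogarithm, which is adequate here since \mu - V < 0:

```python
import numpy as np

def li(s, z, kmax=200):
    """Series polylogarithm Li_s(z) = sum_k z^k / k^s, adequate for |z| <= 1."""
    k = np.arange(1, kmax + 1)
    return np.sum(z**k / k**s)

# Hypothetical units hbar = m = kB = 1, so h = 2 pi and lambda_dB = sqrt(2 pi / T).
T, phi = 0.8, -0.4           # temperature and local mu - V(r) < 0 (arbitrary values)
h = 2.0 * np.pi
lam = np.sqrt(2.0 * np.pi / T)

# Left side: the momentum integral (D.3), done radially with a Riemann sum.
p = np.linspace(0.0, 60.0, 400001)
fd = p**2 / (np.exp((0.5 * p**2 - phi) / T) + 1.0)
n_numeric = (4.0 * np.pi / h**3) * fd.sum() * (p[1] - p[0])

# Right side: the polylogarithm form (D.5).
n_polylog = -li(1.5, -np.exp(phi / T)) / lam**3

assert abs(n_numeric / n_polylog - 1.0) < 1e-6
```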
In a harmonic potential, the in-trap distribution of atoms is then:

n(x, y, z, 0) = -\frac{1}{\lambda_{dB}^3}\, \mathrm{Li}_{3/2}\!\left(-\exp\left[\beta\left(\mu - \frac{1}{2} m (\omega_x^2 x^2 + \omega_y^2 y^2 + \omega_z^2 z^2)\right)\right]\right)  (D.6)
To determine the spatial distribution after expansion (and hence the recorded time-of-flight image), we would have to model the (potentially extremely) complicated dynamics of interacting expanding fermions. We can simplify the problem greatly by assuming the particles to be non-interacting and expanding in the absence of an external potential, also known as the ballistic approximation. Then, after expansion time t, a particle that initially (at time t_0 = 0) occupied the phase space cell \{r_0, p_0\} will occupy the cell \{r_0 + \frac{t p_0}{m}, p_0\}, meaning that the phase space occupation density is f(r, p, t) = f(r - \frac{pt}{m}, p). To obtain the spatial density of the cloud after the time t, we integrate over momentum:

n(r, t) = \int f\!\left(r - \frac{pt}{m}, p\right) dp  (D.7)
Plugging in (D.2) gives

n(r, t) = \frac{1}{h^3} \int \frac{dp}{e^{(H(r - \frac{pt}{m}, p) - \mu)/k_B T} + 1}  (D.8)

Evaluating this integral for H = \frac{p^2}{2m} + \frac{1}{2} m \sum_i \omega_i^2 r_i^2 (see Appendix B) gives [14, (43)]:
(see Appendix B) gives [14, (43)]:
"
!!#
1 Y
1
1 X ωi2 ri2
q
n(r, t) = − 3
Li3/2 − exp β µ − m
2
λdB i
1 + ωi2 t2
1 + wi2 t2
i
q
We can define scaling parameters bi (t) = 1 + wi2 t2 in which case we evidently have
Y 1
x y z
n(x, y, z, t) =
n
, , ,0
bi (t)
bx by bz
i
69
(D.9)
(D.10)
The same result was obtained in the classical treatment in Appendix C. This is a rare example of a classical and a quantum system behaving analogously: the statistics of the fermions do not affect their expansion. After expansion for t \gg 1/\omega_i, the 3-D distribution of atoms in the special case of a harmonic trap becomes [14, (44)]:

n(x, y, z, t) = -\frac{1}{\lambda_{dB}^3\, \omega_x \omega_y \omega_z t^3}\, \mathrm{Li}_{3/2}\!\left(-\exp\left[\beta\left(\mu - \frac{1}{2} m\, \frac{x^2 + y^2 + z^2}{t^2}\right)\right]\right)  (D.11)
These expressions might be useful if we need to fit projections of our trap in a direction not along a trap axis, although I am almost certain that the required rotational change of variables will render any necessary integrals intractable. Here \lambda_{dB} = \sqrt{2\pi\hbar^2/(m k_B T)} is the de Broglie wavelength and \omega_i are the trap frequencies in the different directions.
By integrating (D.9) along one dimension (i.e. z) we obtain the column density of atoms in the trap (see Appendix B):

n_z(x, y, t) = -\frac{1}{\lambda_{dB}^3} \sqrt{\frac{2\pi}{\beta m}}\, \frac{1}{\omega_z}\, \frac{1}{\sqrt{(1 + \omega_x^2 t^2)(1 + \omega_y^2 t^2)}}\; \mathrm{Li}_{2}\!\left(-e^{\beta\left(\mu - \frac{m}{2}\frac{\omega_x^2 x^2}{1 + \omega_x^2 t^2} - \frac{m}{2}\frac{\omega_y^2 y^2}{1 + \omega_y^2 t^2}\right)}\right)  (D.12)
The fitting functions for a non-interacting, expanding Fermi gas are given as [14, (65), (66)]:

n_{2D}(x, y) = n_{2D,0}\, \frac{\mathrm{Li}_2\!\left(-\exp\left[q - \left(\frac{x^2}{R_x^2} + \frac{y^2}{R_y^2}\right) f(e^q)\right]\right)}{\mathrm{Li}_2(-e^q)}  (D.13)

in a 2-D image (projected along one trap axis), and

n_{1D}(x) = n_{1D,0}\, \frac{\mathrm{Li}_{5/2}\!\left(-\exp\left[q - \frac{x^2}{R_x^2} f(e^q)\right]\right)}{\mathrm{Li}_{5/2}(-e^q)}  (D.14)
for a 1-D image (projected along two axes, giving a 1-D function of position). In these equations, we have \beta = 1/k_B T and q = \mu\beta, where \mu is the chemical potential of the gas, determined by the normalization condition on the atom gas (total atom number conserved) and the potential trapping the atoms. The function f is given by f(x) = \frac{1+x}{x} \log(1+x). The factors of \mathrm{Li}_2(-e^q) and \mathrm{Li}_{5/2}(-e^q) are included in order to give an easy physical interpretation to the parameters n_{2D,0} and n_{1D,0}, which are the peak atom densities in the 2-dimensional and 1-dimensional images respectively.
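A minimal sketch of the one-dimensional fitting function (D.14), using a series polylogarithm that is adequate on the non-degenerate side q \le 0 (for degenerate clouds with q > 0 a proper numerical polylogarithm would be needed). The function and parameter names are hypothetical, not taken from the actual fitting code:

```python
import numpy as np

def li(s, z, kmax=200):
    # Series polylogarithm; adequate for z = -e^q with q <= 0.
    k = np.arange(1, kmax + 1)
    return np.sum(z**k / k**s)

def f(x):
    # The interpolating function f(x) = (1 + x)/x log(1 + x) from the text.
    return (1.0 + x) / x * np.log1p(x)

def n1d_model(x, n0, Rx, q):
    """One-dimensional Fermi fitting function (D.14); n0, Rx, q are fit parameters."""
    fq = f(np.exp(q))
    num = np.array([li(2.5, -np.exp(q - (xi / Rx) ** 2 * fq))
                    for xi in np.atleast_1d(x)])
    return n0 * num / li(2.5, -np.exp(q))

# Classical limit: for q << 0, Li_{5/2}(z) ~ z, so the profile reduces to a Gaussian.
x = np.linspace(-2.0, 2.0, 5)
prof = n1d_model(x, 1.0, 1.0, -10.0)
gauss = np.exp(-x**2 * f(np.exp(-10.0)))
assert np.allclose(prof, gauss, rtol=1e-4)
```

A model in this form can be handed directly to a least-squares fitter, with (n0, Rx, q) as the free parameters.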
We can evaluate the parameters in (D.13):

n_{2D,0} = \frac{1}{\lambda_{dB}^3} \sqrt{\frac{2\pi}{\beta m}}\, \frac{1}{\omega_z}\, \frac{1}{\sqrt{1 + \omega_x^2 t^2}}\, \frac{1}{\sqrt{1 + \omega_y^2 t^2}}, \qquad R_x = \sqrt{\frac{2 f(e^q)}{\beta m \omega_x^2} \left(1 + \omega_x^2 t^2\right)}  (D.15)
The second factor in the expression for R_x is simply the scaling factor for the free expansion of a non-interacting Fermi gas. The first factor contains the essential physics. When the gas is cold, q \gg 0 and so f(e^q) \approx q. In this limit, R_x = \sqrt{2\mu/m\omega_x^2} = \sqrt{2E_F/m\omega_x^2}, which is the limiting spatial radius for a degenerate Fermi gas at zero temperature: the gas fills up the trap to the location where the potential energy equals the chemical potential (which is also the Fermi energy at the centre of the trap). At the other extreme, when the gas is hot, q \ll 0, we have f(e^q) \approx 1 + e^q \approx 1, and so R_x = \sqrt{2 k_B T/m\omega_x^2}, which is the classical (thermal) cloud radius, where the potential energy of the atoms in the cloud equals the total energy per degree of freedom ordained by the equipartition theorem and where the occupation number drops classically by a factor of e.
Figure D.1: Leftmost image shows the one-dimensional Fermi fitting function (D.14) for different values of q; the legend shows the corresponding ratios T/T_F. The central image shows the two-dimensional fitting function (D.13) for q = -10, T/T_F = 15. Note the wide wings of the cloud, corresponding to a Gaussian shape; the circle shows the thermal radius of the cloud. The rightmost image shows a degenerate Fermi cloud (q = 10, T/T_F = 0.1), fitting entirely within its Fermi radius (circle in figure).
The advantage of using the parameters R_x and R_y to characterize the cloud lies in their direct relationship to the size of the cloud at all temperatures, allowing a complete decoupling of the fitting algorithm from any physical parameters of the system.
These fitting parameters allow us, in the ideal case, to determine the two independent quantities that characterize an atom cloud in equilibrium: atom number and temperature. The degeneracy parameter T/T_F of the cloud (where T_F is the Fermi temperature at the centre of the trap) is given by

\frac{T}{T_F} = \left[-6\, \mathrm{Li}_3(-e^q)\right]^{-1/3}  (D.16)
The degeneracy parameter depends only on q, and hence only on the shape of the cloud profile. The temperatures T_i of the cloud are given by:

k_B T_i = \frac{1}{2} m \omega_i^2\, \frac{R_i^2}{b_i(t)^2 f(e^q)}  (D.17)
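The thermometry relations (D.16) and (D.17) can be sketched as follows; the series polylogarithm is again restricted to q \le 0, and the function names are hypothetical:

```python
import numpy as np

def li3(z, kmax=400):
    # Series Li_3; adequate for z = -e^q with q <= 0.
    k = np.arange(1, kmax + 1)
    return np.sum(z**k / k**3)

def t_over_tf(q):
    """Degeneracy parameter (D.16): T/T_F = [-6 Li_3(-e^q)]^{-1/3}."""
    return (-6.0 * li3(-np.exp(q))) ** (-1.0 / 3.0)

def f(x):
    # f(x) = (1 + x)/x log(1 + x), as in the fitting functions above.
    return (1.0 + x) / x * np.log1p(x)

def temperature(omega_i, R_i, t, q, m=1.0, kB=1.0):
    """Per-axis temperature from the fitted radius R_i via (D.17)."""
    b2 = 1.0 + omega_i**2 * t**2
    return 0.5 * m * omega_i**2 * R_i**2 / (b2 * f(np.exp(q)) * kB)

# At q = 0 (onset of degeneracy) the standard value is T/T_F ~ 0.57.
assert abs(t_over_tf(0.0) - 0.57) < 0.01

# Classical limit (q << 0): f -> 1, so kB T = m omega^2 R^2 / (2 b^2).
assert abs(temperature(2.0, 1.0, 0.0, -30.0) - 2.0) < 1e-6
```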
Note that this expression can produce different values of the temperature for different imaging axes. Possible reasons for such a discrepancy would most likely be associated with imaging-system imperfections (e.g. optical distortions, or imaging not along a primary axis of the trap), as well as an incorrect assumption of the non-interacting nature of the fermions as they expand from the trap.
Index

Anritsu MG37022A
ballistic expansion
degenerate Fermi gas
dimensional reduction
exchange interaction
Feshbach resonance
Ising model
local density approximation
polylogarithm
Q function
quantum simulation
Stoner model