Download Constraint Effective Potential of the Magnetization - Uwe

Document related concepts

Topological quantum field theory wikipedia , lookup

Aharonov–Bohm effect wikipedia , lookup

EPR paradox wikipedia , lookup

Quantum chromodynamics wikipedia , lookup

T-symmetry wikipedia , lookup

Quantum state wikipedia , lookup

Coupled cluster wikipedia , lookup

Nitrogen-vacancy center wikipedia , lookup

Canonical quantization wikipedia , lookup

Bell's theorem wikipedia , lookup

History of quantum field theory wikipedia , lookup

Symmetry in quantum mechanics wikipedia , lookup

Renormalization group wikipedia , lookup

Scalar field theory wikipedia , lookup

Quantum electrodynamics wikipedia , lookup

Spin (physics) wikipedia , lookup

Relativistic quantum mechanics wikipedia , lookup

Probability amplitude wikipedia , lookup

Ising model wikipedia , lookup

Ferromagnetism wikipedia , lookup

Transcript
Constraint Effective Potential of the Magnetization in
the Quantum XY Model
Masterarbeit
der Philosophisch-naturwissenschaftlichen Fakultät
der Universität Bern
vorgelegt von
Pascal Stebler
2011
Leiter der Arbeit:
Prof. Dr. Uwe-Jens Wiese
Albert Einstein Center for Fundamental Physics
Institut für Theoretische Physik, Universität Bern
The goal of this master thesis is to simulate the (2+1)-d spin 21 quantum XY model
with Monte Carlo methods and to compare the results with predictions of a lowenergy effective field theory. Therefore centered moments, probability distributions
of the magnetization, and the constraint effective potential have been compared. The
numerical results match the theoretical predictions very well. For the simulations a
loop-cluster algorithm was implemented. This allows us to use improved estimators.
A recently introduced new improved estimator was further optimized. The results
of this master thesis have been published in the Journal of Statistical Mechanics
with the same title as this thesis [1].
Contents
1 Introduction
7
2 Microscopic XY model
9
2.1 The Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 The partition function . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Time discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Monte Carlo method
3.1 Loop cluster algorithm . . . . . . . . . . . . . . . . . . . . . . . . . .
3.2 Accounting for the external magnetic field . . . . . . . . . . . . . . .
3.3 Measuring the standard error of the expectation value of observables
3.4 Improved estimator for directly measurable observables . . . . . . . .
3.5 Improved estimator for the probability distribution pe(Φ1 ) . . . . . . .
3.5.1 Improved building of the histogram . . . . . . . . . . . . . . .
3.5.2 Further optimizations with no influence on the resulting histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3.5.3 Further improvements introducing a systematic error . . . . .
3.5.4 Conversion to the probability distribution pe(Φ1 ) . . . . . . . .
3.6 Probability distribution p(Φ) of the magnetization magnitude . . . .
19
20
24
25
25
27
27
4 Predictions of low-energy effective field theory
4.1 Probability distribution p(Φ) of the mean magnetization .
4.2 Constraint effective potential . . . . . . . . . . . . . . . . .
4.3 Other predicted quantities . . . . . . . . . . . . . . . . . .
4.4 Re-weighting the simulations for an external magnetic field
4.5 Determination of the low-energy parameters . . . . . . . .
32
34
36
37
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
39
41
41
43
44
45
5 Comparison of simulation results with the theoretical predictions
5.1 Probability distribution p(Φ) of the magnetization magnitude .
5.2 Centered moments of p(Φ) . . . . . . . . . . . . . . . . . . . . .
5.3 Constraint effective potential u(Φ) . . . . . . . . . . . . . . . .
5.4 Rescaled constraint effective potential U (ψ) . . . . . . . . . . .
5.5 Fit to U (ψ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.6 Determination of k0 . . . . . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
47
47
49
50
50
51
54
6 Conclusions
. .
. .
. .
B
. .
57
5
1 Introduction
Strongly coupled 2-dimensional lattice models are used to describe layered systems
like superfluid films or layered ferro- and antiferromagnets. The precursors of Hightemperature superconductors are undoped antiferromagnets.
The (2 + 1)-d quantum XY model can be used to model a magnetic system which
consists of a 2-dimensional layer where the magnetization order parameter can only
take values in the XY plane. Another application are simple models of superfluidity
where a lattice site for e.g. spin up indicates the presence of a hard-core boson, for
spin down its absence.
The quantum XY model consists of spins placed on a lattice. Not all components
of a spin are measurable simultaneously. This is in contrast to classical vector models, e.g. so-called N -vector-models or O(N )-models, where the classical XY model
is the O(2)-model. In the following when we write XY model, its quantum version
is meant.
For the (quantum) XY model, the first and second component of neighboring spins
are interacting. The interaction of third components is fully suppressed. Therefore
the XY model has a global U (1) symmetry. For the model in two or more spatial dimensions, the symmetry breaks down in the zero-temperature infinite-volume limit.
As a special feature of the XY model, the ferromagnetic and the antiferromagnetic
case are unitarily equivalent.
The XY model can be very efficiently simulated with a Monte Carlo loop-cluster
algorithm. This allows us to use improved estimators for the magnetization order
parameter, for the susceptibility, and for the probability distribution of the magnetization’s first component. For this thesis the XY model was simulated on a square
lattice. This results in a model on a bipartite lattice for which numerical simulations
do not suffer from a sign-problem.
As a consequence of the symmetry breakdown, a massless Nambu-Goldstone boson appears. For a magnetic system the Nambu-Goldstone bosons are known as
magnons. At low energy they are the relevant degrees of freedom. Hence, the results of the simulations can be compared with analytic predictions of a low-energy
effective field theory. This low-energy effective field theory was developed in [2–6].
Göckeler and Leutwyler [7, 8] worked out predictions for the shape of the constraint
effective potential for classical 3-d O(N ) models.
7
1 Introduction
Since U (1) = SO(2) ⊂ O(2) the global U (1) symmetry of the quantum XY model
is equivalent to an O(2) symmetry as well. It’s possible to directly use the results of
the classical 3-d O(N ) models for the (2 + 1)-d O(2) quantum XY model, which was
done in [1]. In a precursor of this work the predictions of the low-energy effective
field theory for 3-d O(N ) models were compared with Monte Carlo data for the
antiferromagnetic O(3) quantum Heisenberg model [9].
In this thesis, first the microscopic XY model and its time discretization is described. Then the Monte Carlo methods with the loop-cluster algorithm and improved estimators are introduced. After that the predictions of the low-energy effective field theory are summarized. Finally the simulations are compared to the
theoretical predictions and conclusions are drawn.
8
2 Microscopic XY model
2.1 The Hamiltonian
The Hamiltonian of the quantum XY model is defined as
X
~ · B,
~
H = −J
(Sx1 Sy1 + Sx2 Sy2 ) − M
(2.1)
hxyi
where J is a constant. Position space and spin space are completely separated.
The lattice could be of arbitrary geometry. Here we concentrate on a 2-dimensional
square lattice.
The microscopic XY model is defined on a 2-dimensional square lattice of finite
size, with lattice spacing a and periodic boundary conditions. We only consider
interactions of nearest-neighbor sites on the lattice, which is indicated by the expression hxyi. For a classical system the spins would be ordinary vectors with n
~x are spin 1 operators
components. Here the quantum XY model is studied, so S
2
with the standard commutation relations
[Sxa , Syb ] = iδxy εabc Sxc
with ~ = 1. The total spin is
~=
S
X
(Sx1 , Sx2 , Sx3 )
(2.2)
(2.3)
x
which leads to
[S a , S b ] = iεabc S c .
(2.4)
~ = (B1 , B2 ) enters the Hamiltonian through
A uniform external magnetic field B
~ · B,
~ where
the term −M
~ = (S 1 , S 2 )
M
(2.5)
~ and M
~ are defined in the
is the so-called magnetization order parameter. Both B
XY plane.
~ = 0 only the 3-component of the total spin commutes with the
Even with B
Hamiltonian:
X
X
[S 1 , H] = −2iJ
Sx2 Sy3 ,
[S 2 , H] = 2iJ
Sx3 Sy1 ,
[S 3 , H] = 0.
(2.6)
hxyi
hxyi
9
2 Microscopic XY model
This means that S 3 is a conserved quantity and the XY model has a global U (1)
~ 6= 0 breaks the symmetry
spin symmetry. An applied external magnetic field B
globally from U (1) to {1}, the trivial group containing only the identity.
The system has translation symmetries in x and y direction caused by the periodic
boundary conditions. Further the system is symmetric under rotation by 90°, and
it can be mirrored on the x- or y-axis.
For even size of the lattice in direction x and y, the lattice is so-called bipartite, it
can be broken in two sub-lattices, the first lattice containing the sites with (x1 +x2 )/a
even, the second lattice containing the remaining sites with (x1 + x2 )/a odd.
For J > 0 it is energetically favorable for neighboring spins to point in the same
direction. This models a ferromagnet. For J < 0 neighboring spins would prefer to
be arranged antiparallel. This would model an antiferromagnet. Without an external magnetic field on a bipartite lattice the ferromagnetic and the antiferromagnetic
XY model are equivalent. If one of the sub-lattices is transformed by the unitary
.
transformation U = 2S 3 and the other sub-lattice is left as it is, then indeed
U U † = 1,
U S 1 U † = −S 1 ,
U S 2 U † = −S 2 ,
U S 3U † = S 3.
(2.7)
The effect is that on one of the sub-lattices (Sx1 , Sx2 , Sx3 ) is replaced by (−Sx1 , −Sx2 , Sx3 ).
Inverting the sign of J one gets back the old Hamiltonian. From now on we will use
J > 0 if not indicated otherwise.
2.2 The partition function
In quantum statistical mechanics, on a canonical ensemble, this means when not
energy but temperature of a system is fixed, the partition function is given by
X
Z = Tr exp(−βH) =
hn| exp(−βH)|ni.
(2.8)
|ni
With energy eigenvalues En and eigenstates |ni as the solutions of the timeindependent Schrödinger equation
H|ni = En |ni,
the partition function can be written as
X
Z=
exp(−βEn ).
(2.9)
(2.10)
|ni
There is no known analytical way to diagonalize the Hamiltonian of the XY model.
This means we have to evaluate the partition function on all possible states of a
10
2.3 Time discretization
convenient basis |ni “by hand” which is an effort growing exponentially with the
volume. Even if we write a program which does this for us in a systematic numerical
way, the accessible volumes are extremely small. We therefore will use a Monte Carlo
method described in the next chapter.
The state of the spins, quantized in one direction, will be our basis. We will mostly
quantize in 1-direction and can therefore measure S 1 . If e.g. there is a 1-dimensional
system with 4 spins, the first pointing to the left, the other ones pointing to the
right, this is denoted by
|ni = | ←, →, →, →i.
(2.11)
The spin operators will measure
1
1
Sx=1
|ni = + |ni,
2
1
1
Sx=3
|ni = − |ni,
2
1
1
|ni = − |ni,
Sx=2
2
1
1
|ni = − |ni.
Sx=4
2
(2.12)
Because of the U (1) symmetry the 2-quantization measuring S 2 is equivalent to the
1-quantization.
2.3 Time discretization
Since we will be interested in the zero temperature limit T → 0, β will become large
such that the exponent of −βH is computable with very high orders stochastic series
expansion (SSE) only.
As a first step to a computable solution we divide the exponent in M parts.
With M = β it becomes
exp(−βH) = exp(−H)M .
(2.13)
~ Later
Now we only consider a Hamiltonian without an external magnetic field B.
~
for the 1-quantization we will be able to bring back an external field B = (B1 , 0).
We break the remaining Hamiltonian in k parts, where k is the number of nearestneighbors of a spin
k
X
H=
Hi .
(2.14)
i=1
For a 1-dimensional system a spin has k = 2 nearest neighbors, for a 2-dimensional
system on a square lattice k = 4. Since the lattice is bipartite we can arrange that
for each of the partial Hamiltonians Hi each spin interacts only with one neighbor.
In one dimension e.g. in H1 each spin on an even lattice site interacts with the spin
on the next (odd) site. In H2 each spin on an even site interacts with the spin on the
11
2 Microscopic XY model
previous site. The Hamiltonians Hi can therefore be block-diagonalized. Formally
a partial Hamiltonian can be written as
X
hxy ,
(2.15)
Hi =
hxyii
where hxyii describes constraint interactions as described above. For our XY model
hxy = −J(Sx1 Sy1 + Sx2 Sy2 ).
(2.16)
The partial Hamiltonians don’t commute
[Hi , Hj ] 6= 0 for i 6= j.
(2.17)
However, using the Baker-Campbell-Hausdorff formula and
[Hi , Hj ] = O(2 ) for i 6= j,
(2.18)
the exponential in the partition function becomes
k
k
M Y
M
X
exp(−H)M = exp − Hi
=
exp(−Hi )
+ O(2 ).
i=1
Since
(2.19)
i=1
X
|nihn| = 1,
(2.20)
n
complete sets of eigenstates can be inserted in the expression for the partition function:
Z=
X
k
Y
M
hn0 |
(exp(−Hi )1) |n0 i
n0
=
hnjk+i−1 | exp(−Hi )|njk+i i
l=0
=
i=1
−1 Y
k
MY
k−1 X M
Y
nl
j=0 i=1
MY
k−1 X M
−1 Y
k
Y
l=0
nl
Y
j=0 i=1 hxyii
× hsx,jk+i−1 sy,jk+i−1 | exp(−hxy )|sx,jk+i sy,jk+i i
X
=
exp(−S[s]),
(2.21)
[s]
where [s] stands for all possible spin configurations. This approach is often called
“Trotter decomposition”. To respect boundary conditions the indices jk + i in the
above equation have to be taken modulo kM .
12
2.3 Time discretization
The resulting partition function formally looks like a discretized path integral
in Euclidean time. It’s common to denote β as “time“ and the indices of ni as
Euclidean time steps t. In this form the partition sum can be written as
Y X
Z=
exp(−S[s]),
(2.22)
(x,t) sx,t =
where (x, t) is the set of all the kM V spins. Here V denotes the number of spatial
lattice sites. We call the number of time steps
Lt = kM.
(2.23)
The term exp(−S[s]) in eq.(2.22) is the Boltzmann weight of the spin configuration [s]. The expectation value of an observable can be calculated with
hOi =
1
1X
O[s] exp(−S[s]).
Tr[O exp(−βH)] =
Z
Z
(2.24)
[s]
One can read off from eq.(2.22) that there are 2kM V spin configurations to be
taken into account. This again shows the need for a simulation method. At least
we now no longer have to deal with a huge non-diagonalizable Hamiltonian.
The Trotter decomposition leads to a systematic discretization error. It originates from the Baker-Campbell-Hausdorff formula and is of order O(2 ). Since is inversely proportional to the number of time steps, doubling the steps doubles
the number of spins to consider in the simulation. In return the systematic error is
divided by four. Simulations with various values for have to be performed to compare the systematic error introduced by the Trotter decomposition to the stochastic
error of the simulation.
The time discretization is not mandatory. A continuous time method was introduced in [10]. It avoids the discretization error and the need to perform simulations
for various values for .
Figure 2.1 shows graphically what happens with the time-discretized partition
function of eq.(2.21) for a 1-dimensional system. The shaded areas in the figure are
called plaquettes. Now the partition function can be re-expressed as
Y X
Y
Z=
hsx,t sy,t | exp(−hxy )|sx,t+1 sy,t+1 i
(x,t) sx,t = Plaquettes
=
Y X
Y
(x,t) sx,t = Plaquettes
=
Y X
Y
hsx,t sy,t |
∞
X
1
(−hxy )k |sx,t+1 sy,t+1 i
k!
k=0
hsx,t sy,t |T |sx,t+1 sy,t+1 i.
(2.25)
(x,t) sx,t = Plaquettes
13
2 Microscopic XY model
β 6
n0 r
r
r
r
r
r
r
r
r
n5 r
r
r
r
r
r
r
r
r
n4 r
r
r
r
r
r
r
r
r
n3 r
r
r
r
r
r
r
r
r
n2 r
r
r
r
r
r
r
r
r
n1 r
r
r
r
r
r
r
r
r
n0 r
r
r
r
r
r
r
r
r -
1
2
3
4
5
6
7
8
1
L/a
Figure 2.1: Time-discretized 1-dimensional spin system with V = 8 spatial lattice
sites, 6 time-slices, and periodic boundary conditions. Here Lt = 2M =
β
6, L/a = V = 8, = M
= β3 . The shaded squares are called “plaquettes”.
They connect two spins from one time-slice with the next time-slice.
Here T = exp(−hxy ) is the transfer matrix for one pair of spins. It defines weights
for all possible transitions between the spin states |sx,t sy,t i and |sx,t+1 sy,t+1 i. Here
again t + 1 has to be taken modulo Lt = kM . To evaluate the transfer matrix as a
series, we newly have to insert complete sets of eigenstates. This results in terms of
the form
hsx,t1 sy,t1 | − hxy |sx,t2 sy,t2 i.
(2.26)
Calculating the powers of −hxy with summing over all eigenstates is equivalent to
taking powers of matrices if we denote hxy as a 4 × 4 matrix. The state of sx,t1 sy,t1
then serves as a row index, the state of sx,t2 sy,t2 as a column index.
Figure 2.2 shows an example of a Trotter decomposition of a 2-dimensional system.
Using the standard representation of the Pauli-matrices ~σ
0 1
0 −i
1 0
1
2
3
σ =
, σ =
, σ =
,
1 0
i 0
0 −1
(2.27)
the 3-quantized spin operators are defined as
Si =
with
| ↑i =
14
1
0
σi
2
(2.28)
and | ↓i =
0
1
.
(2.29)
2.3 Time discretization
1
4
r
r
r
r
4
4
4
4
r
1
2
Ly /a 3
r
r
1
r
1
r
1
r
3
r
2
1
r
3
r
1
r
3
3
r
r
3
r
3
r
3
r
4
1
2
3
r
2
4
2
1
r
2
4
2
1
3
2
4
2
r
r
r
2
1
r
4
1
Lx /a
Figure 2.2: Time-discretized 2-dimensional spin system with V = 16 spatial lattice
sites and periodic boundary conditions. The time axis is out of the plane
in the reader’s direction. Here Lx /a = Ly /a = 4. The numbered lines
represent the lines which on the previous figure are the top and bottom
line of a plaquette. The numbers i on the lines represent the index of the
partial Hamiltonian Hi .
15
2 Microscopic XY model
Since
1
S 1 | ↑i = | ↓i,
2
1
S 1 | ↓i = | ↑i,
2
i
S 2 | ↑i = | ↓i,
2
i
S 2 | ↓i = − | ↑i,
2
(2.30)
J
hxy | ↓↑i = − | ↑↓i.
2
(2.31)
it follows that
hxy | ↑↑i = hxy | ↓↓i = 0,
J
hxy | ↑↓i = − | ↓↑i,
2
Writing the states with parallel spins as first
obtain

0 0
J
0 0
hxy = − 

0 1
2
0 0
and last row respectively column we

0 0
1 0 
.
(2.32)
0 0 
0 0
Using e.g. the unitary transformation
1
U=√
2
with
| ←i =
1
0
1 1
1 −1
(2.33)
and | →i =
0
1
.
(2.34)
e for 1-quantization:
we can write Pauli matrices ~σ
σ
e1 = σ 3 ,
σ
e2 = −σ 2 ,
σ
e3 = σ 1 .
(2.35)
e/2 for 1-quantized spins we obtain
~ = ~σ
Now using S
1
S 1 | ←i = | ←i,
2
i
S 2 | ←i = − | →i,
2
1
S 1 | →i = − | →i,
2
i
S 2 | →i = | ←i,
2
(2.36)
such that
J
J
e
hxy | ←←i = −e
hxy | →→i = − | ←←i + | →→i,
4
4
J
J
e
hxy | ←→i = −e
hxy | →←i = | ←→i − | →←i,
4
4
where the tilde on e
hxy indicates ”1-quantized“. This leads to


−1 0
0
1
J 0
1 −1 0 
e
.
hxy = 
0 
4  0 −1 1
1
0
0 −1
16
(2.37)
(2.38)
2.3 Time discretization
Using
a b
b a
k
1
= (a + b)k
2
1 1
1 1
1
+ (a − b)k
2
1 −1
−1 1
(2.39)
gives
exp
−
a b
b a
= exp(−a)
1 0
0 1
× cosh(b)
− sinh(b)
, (2.40)
0 1
1 0
such that the transfer matrices become


J
1 + exp J2
0
0
1
−
exp
2

1
0
1 + exp − J2 1 − exp − J2 0
e

,
T = 
J
J

0
1
−
exp
−
1
+
exp
−
0
2
2
2
J
J
0
0
1 + exp 2 .
1 − exp 2


1
0 0 0
 0 cosh J
sinh J2 0 
2
.
T =
(2.41)
 0 sinh J
cosh J2 0 
2
0
0
0
1
Depending on the sign of J, some of the off diagonal weights become negative. This
can lead to the so-called sign problem. One can show that on a bipartite lattice
with periodic boundary conditions for all spin configurations the resulting weight is
positive.
It’s easy to analytically solve a 1-dimensional system which consists of two spins
for M = 1. Then the time-discretized system has 2 × 2 spins and = β. For the
1-quantized system one can calculate
cosh2 β J4
2
hM1 i =
,
(2.42)
2 cosh2 β J4 − 1 (1 + exp(−βJ))
and for the 3-quantized system
hM3 2 i =
1
2 cosh2 β J2
(2.43)
These results are of no direct physical interest, but they can be used to check
numerical implementations.
Very small systems with e.g. kM V = 16 are numerically exactly solvable by a
brute force calculation, where all the possible 216 spin configurations [s] are evaluated. This is very simple to implement in a program. The results can be used to
check against the simulators, which are more complicated to implement. As can
17
2 Microscopic XY model
be read off from eqs.(2.25) and (2.41), many configurations do not contribute to
the partition function. Even if only one transition between spin states |sx,t sy,t i
and |sx,t+1 sy,t+1 i is of zero weight, the configuration doesn’t count. This is an
additional inefficiency of the brute force method.
18
3 Monte Carlo method
Instead of generating all possible spin configurations to calculate the expectation
value of an observable as described in eq.(2.24), the Monte Carlo method generates
configurations [si ] with probability
p[si ] =
1
exp(−S[si ]).
Z
(3.1)
The expectation value of an observable then is estimated by
hOi =
1 X
O[si ],
N i
(3.2)
where N is the number of configurations [si ]. The Boltzmann weight implicitly
is incorporated by importance sampling. Assuming a Gaussian distribution of the
observable, the square of the standard error of the mean of the estimated observable
is N times smaller than the variance of the observable. Generating four times more
configurations will then only half the standard error of the mean.
One way to directly generate the configurations would be to randomly choose
a spin configuration and then use a rejection sampling based on the Boltzmann
weight. Since the time to generate random spins is not negligible and most spin
configurations are rejected with high probability, this again is an inefficient method.
A sequence of configurations
[s(1) ] → [s(2) ] → ... → [s(N ) ]
(3.3)
is called a Markov chain, if an algorithm generates a configuration [s(i) ] by using
some few previous configurations [s(i−l) ], ..., [s(i−1) ]. In practice the algorithm will
always use the previous configuration [s(i−1) ]. The resulting configurations are then
correlated. The probability to generate a configuration [sj ] from a configuration [si ]
is called transition probability and is denoted by w[si → sj ].
It is necessary that any configuration with non-zero probability can be reached
after a finite number of iterations, independent of which configuration the algorithm
started from. This is called ergodicity. The algorithm generates correctly distributed
configurations, if it additionally fulfills detailed balance:
p[si ]w[si → sj ] = p[sj ]w[sj → si ].
(3.4)
19
3 Monte Carlo method
{
{
{
a
{
@
@
{
@
@
c
b
{
{
@
@
@
@
{
@
@
d
@
@
{
{
Figure 3.1: Spin configuration patterns on plaquettes which can occur according to
the transfer matrices. The points are spins pointing in one direction of
quantization, the crosses are spins pointing in the opposite direction.
For a quantum spin model this means
exp(−S[si ])w[si → sj ] = exp(−S[sj ])w[sj → si ].
(3.5)
The first generated configurations will probably not represent the equilibrium of
the system. The process to achieve the equilibrium is called thermalization. To
build expectation values of observables only configurations after thermalization are
used.
Random numbers are used to generate the configurations according to the transition probabilities. This can be a source of systematic errors. The random number
generator has to be of high quality. The ones delivered with standard libraries of
the common programming languages usually don’t fulfill the requirements of simulations with large statistics. For this project we will use a random number generator
named ranlux which was constructed by M. Lüscher [11].
3.1 Loop cluster algorithm
Figure 3.1 shows spin configuration patterns on plaquettes which can occur according to the transfer matrices in eq.(2.41). For 1-quantized systems all configurations
are allowed, for 3-quantized systems only configurations ’a’, ’b’, and ’c’ are allowed.
Since S 3 is conserved in 3-quantization this reflects spin conservation on the plaquette. Now one can define so-called bonds connecting two spins together. Spins on the
same bond are then locked together by definition. If one spin is flipped the other has
to be flipped as well. A bond can only be set, if flipping the spins on it generates an
allowed spin configuration. Spins that are connected by bonds form a cluster. The
introduction of bonds has to be such that the resulting partition function doesn’t
change
X
X
Z=
exp(−S[s]) =
exp(−S[s, b]).
(3.6)
[s]
[s,b]
Thus the weight of a spin configuration will be partitioned on the possible breakups.
We want to have an algorithm which works for arbitrary volumes. Especially it
20
3.1 Loop cluster algorithm
e
e
e
e
e
e
e
A
e
B
e
@
e
@
@
@
e
@e
C
Figure 3.2: Plaquette breakups. The direction of the spins is not defined here.
should work on the smallest possible system, i.e. the 1-dimensional system with two
spins and M = 1. It consists of two plaquettes. Both plaquettes will have the same
pattern of spin configurations. The probability of a spin configuration is
1
hs1,0 s2,0 |T |s1,1 s2,1 ihs2,1 s1,1 |T |s2,0 s1,0 i
Z
1
= hs1,0 s2,0 |T |s1,1 s2,1 i2 .
Z
p(s1,0 s2,0 s1,1 s2,1 ) =
(3.7)
Detailed balance is then respected if
w(s1,0 s2,0 s1,1 s2,1 → s01,0 s02,0 s01,1 s02,1 )
p(s01,0 s02,0 s01,1 s02,1 )
=
w(s01,0 s02,0 s01,1 s02,1 → s1,0 s2,0 s1,1 s2,1 )
p(s1,0 s2,0 s1,1 s2,1 )
hs01,0 s02,0 |T |s01,1 s02,1 i2
=
.
hs1,0 s2,0 |T |s1,1 s2,1 i2
(3.8)
For the so-called loop-cluster algorithm, on a plaquette each spin should belong
to exactly one bond. Hence, a plaquette always becomes decorated with two bonds.
Figure 3.2 shows the three so-called plaquette breakups which can occur. Since
in time discretization one spin always belongs to exactly two different plaquettes,
together with periodic boundary conditions the bonds form clusters which are closed
loops. All spins of one cluster are then correlated and each spin belongs to exactly
one cluster. We will denote the i-th cluster of a configuration by Ci . The number of
spins in a cluster will be called cluster length, denoted by |Ci |. In our case the length
of a cluster is always a multiple of 2. If for a certain spin configuration the whole
spin system is decorated by plaquette breakups this results in a full decomposition
of the system into clusters. The number of resulting clusters will be denoted by #C.
This results in
#C
X
|Ci | = kM V.
(3.9)
i=1
Remember that kM V is the number of space-time lattice points of the whole timediscretized system.
Now the art is to assign allowed plaquette breakups to the allowed spin configurations. A working rule for the 1-quantized ferromagnetic XY model (J > 0) is to
allow bonds only for parallel spins. Therefore the size of a cluster Ci will contribute
21
3 Monte Carlo method
←
←
←
a
←
←
→
b
→
←
→
←
c
→
←
←
→
→
d
←
1
2
1
2
1
2
1
2
1 + exp J2
J
@
@
@
A
B
C
1 + exp − 2
J
A
@
@
@
1 − exp − 2
J
exp 2
C
−1
B
Figure 3.3: Spin configurations, weights, and breakups for the 1-quantized ferromagnetic XY model.
proportionally to the first component of the total uniform magnetization. Denoting
the sign of the spins of a cluster Ci with sCi , one can write
X sx,t
|Ci |
MCi =
= sCi
,
kM
2kM
(x,t)∈Ci
M1 =
#C
X
MCi .
(3.10)
i=1
Figure 3.3 shows the spin configurations with their weight and the possible breakups
for J > 0. Since we use a bipartite lattice, there is no sign problem and we can
use the absolute value of the weights. For the spin configurations ’b’, ’c’, and
’d’ the breakup is fixed. Since the weight of a configuration is distributed on its
breakups, for those spin configurations the weight of their uniquely defined breakup
corresponds to the weight of the spin configuration.
There always is only one breakup which leads to a transition between two given
spin configuration patterns. For the smallest system consisting of two plaquettes,
and without an external magnetic field, the transition probability becomes
2
1 1
0
0
0
0
p(br |s1,0 s2,0 s1,1 s2,1 ) , (3.11)
w(s1,0 s2,0 s1,1 s2,1 → s1,0 s2,0 s1,1 s2,1 ) =
2 2
where p(br |s1,0 s2,0 s1,1 s2,1 ) is the probability to choose the unique breakup which
leads to the transition from spin configuration s1,0 s2,0 s1,1 s2,1 to spin configuration
s01,0 s02,0 s01,1 s02,1 . The factors 21 represent the probabilities to flip the spins on a bond
such that the correct spin configuration arises, and the square accounts for the two
plaquettes. Inserting eq.(3.11) into eq.(3.8) one can read off that detailed balance
is respected if
hs01,0 s02,0 |T |s01,1 s02,1 i2
p(br |s1,0 s2,0 s1,1 s2,1 )2
=
.
p(br |s01,0 s02,0 s01,1 s02,1 )2
hs1,0 s2,0 |T |s1,1 s2,1 i2
22
(3.12)
3.1 Loop cluster algorithm
↑
↑
↑
a
↑
↑
↓
b
↓
↑
↑
↓
J
↓
A
C
cosh 2
↑
c
@
@
@
1
sinh |J| A
B
@
@
@
2
B
C
Figure 3.4: Spin configurations, weights and breakups for the 3-quantized ferromagnetic XY model.
This even holds with an external magnetic field, since the probabilities to flip spins
cancel in the relation of the transition probabilities w of eq.(3.8). Since probabilities
have to be positive eq.(3.11) simplifies to
w(br) = p(br |s1,0 s2,0 s1,1 s2,1 ) |hs1,0 s2,0 |T |s1,1 s2,1 i|
= p(br |s01,0 s02,0 s01,1 s02,1 ) |hs01,0 s02,0 |T |s01,1 s02,1 i|.
(3.13)
This motivates the weights in figure 3.3 and it shows that w(br), the weight of a
breakup, doesn’t depend on the spin configuration. With eq.(3.13) the probabilities
of the breakups for spin configuration ’a’ are:
1 + exp − J2
J
= exp − pA =
,
2
1 + exp J2
1 − exp − J2
,
pB =
1 + exp J2
exp J2 − 1
.
pC =
(3.14)
1 + exp J2
The probabilities correctly sum to 1 if the system is ferromagnetic (J > 0). Then
the weight of the spin configuration is correctly distributed among the weights of
the breakups.
For a 3-quantized XY model all possible breakups per spin configuration are shown
in figure 3.4. As before each breakup appears on two spin configurations, related
by flipping the spins of one bond. Removing one or two of the breakup pairs, it
wouldn’t be possible to respect detailed balance and the correct distribution of the
weights of the spin configurations among the weights of the breakups. The sign of J
doesn’t play a role. Thus we simulate the ferromagnetic and the antiferromagnetic
case at the same time. The probabilities of the breakups for the spin configurations
23
3 Monte Carlo method
are:
pa,A
pb,A
pc,B
|J|
1
=
1 + exp − ,
2
2
1 + exp − |J|
2 ,
=
2 cosh J2
exp |J|
−1
2
=
,
2 sinh |J|
2
pa,C
pb,B
pc,C
1
|J|
=
1 − exp − ,
2
2
exp |J|
−
1
2
,
=
2 cosh J2
1 − exp − |J|
2
=
.
(3.15)
2 sinh |J|
2
Here it is not obvious what the cluster lengths should represent. In time-direction
bonds connect parallel spins, in space-direction antiparallel ones, but in ”diagonal“ direction the bonds connect parallel spins again. It is possible to measure the uniform magnetization M1 using the resulting cluster lengths as in the
1-quantization [12].
To change the whole system from one spin configuration to the next, the socalled multi-cluster algorithm decorates all plaquettes with breakups according to
the above rules and probabilities, leaving the spins untouched. This defines the
cluster loops. Then each cluster is flipped with probability p = 21 . If the cluster
is flipped, all spins of this cluster change their direction. In this second step the
breakups and thus the bonds remain fixed. This algorithm will be the basis for the
improved estimators described later. It is obvious that in the second step one of the
possible results of the flips is a spin configuration, where all spins point in the same
direction. In the 1-quantized XY model this configuration has minimal energy and
thus the highest Boltzmann weight. It is called the reference configuration and can
be used as the first configuration to start a simulation. Since it is possible to reach
this configuration from each other valid configuration, in reverse it’s possible by
generating the corresponding breakup on the reference configuration to reach each
valid configuration. Hence, at least in principle, in two iterations an arbitrary valid
configuration can transform to another arbitrary valid configuration. Therefore this
algorithm is ergodic.
3.2 Accounting for the external magnetic field
Without an external magnetic field, each cluster is flipped with fifty-fifty probability. Since in the 1-quantized ferromagnetic XY model the length of the cluster
contributes proportionally to the magnetization in 1-direction, the external magnetic
field can be respected. The partition function Z gets the additional factor
#C
Y
|Ci |
~
~
exp β M · B =
exp βB1 sCi
.
2kM
i=1
24
(3.16)
3.3 Measuring the standard error of the expectation value of observables
Here the sign sCi of the magnetization of the cluster is positive if the spins of the
~ = (B1 , 0). Thus the probability
cluster are parallel to the external magnetic field B
for a cluster Ci to point in a certain direction is
p(sCi ) =
1
|Ci |
1 + exp − sCi βB1 kM
.
(3.17)
Here we used
exp(α)
1
=
.
exp(α) + exp(−α)
1 + exp(−2α)
(3.18)
3.3 Measuring the standard error of the expectation
value of observables
The configurations in the Markov chain are correlated. Therefore by calculating
standard deviations of observables their errors are underestimated. One can perform a so-called binning of the measured values. After thermalization, the first b
values of the Markov chain are picked. Their mean value will be the first binned
value. Then the next b values of the Markov chain are picked to build the second
binned value and so on. If one calculates the standard deviation of the binned values
for different values of b, at the beginning one observes an increase of the standard
deviation for increasing b. A plateau of the standard deviation is achieved for some
range of b. This value is an estimate for the error of the observable. Binning too
much (large b), one leaves the plateau and the values become unstable. The simulator implemented for this thesis performs the described binning for each directly
measurable implemented observable like magnetization or square of magnetization.
A simple and established practical approach analogue to binning is to derive the
variation of an observable from multiple simulation runs. For this thesis stochastic
errors were always estimated from 40 simulation runs for each simulation setup.
Both variants were compared on samples of results. The estimated stochastic
error matched well.
3.4 Improved estimator for directly measurable
observables
The multi-cluster algorithm allows to build improved estimators for observables
which are related to the cluster lengths. We are interested in the magnetization and
25
3 Monte Carlo method
its square, which lead to the uniform susceptibility
β
(hM1 2 i − hM1 i2 ).
(3.19)
V
Spins in a common cluster are fully correlated, but different clusters are completely
independent of each other. This even holds for the 1-quantized XY model with an
external magnetic field in 1-direction.
χ1 =
After updating the breakups of the whole system, there is a certain number of
clusters #C and the multi-cluster algorithm has to choose one spin configuration
out of 2#C . Without external magnetic field the choice happens randomly with
equal probability. With an external field the probabilities in eq.(3.17) are respected.
Instead of measuring only the chosen spin configuration, the improved estimator
takes all 2#C possibilities into account.
Without external field for a directly measurable observable O this leads to the
improved estimator of the observable:
X
1 X
hOiimp = #C
O(sC1 , sC2 , ..., sC#C ).
···
(3.20)
2 s =±1
=±1
s
C1
C#C
For the magnetization this leads to
hM1 iimp =
X
1
2#C
···
sC1 =±1
#C
X X
M1 =
#C
X
!2
MCi
i=1
MCi = 0.
(3.21)
i=1 sCi =±1
sC#C =±1 i=1
With
2
MCi =
#C
X
X
=
#C
X
MCi MCj
(3.22)
i,j=1
the improved estimator of the magnetization squared becomes
hM1 iimp
2
#C
X
X X
1
= #C
···
sC sC |Ci ||Cj |
2 (2kM )2 s =±1 s =±1 i,j=1 i j
C1
C#C
X
1
|Ci |2 .
(2kM )2 i=1
#C
=
Hence, the improved estimator for the uniform susceptibility is
* #C
+
X
β
χ1,imp =
|Ci |2 .
V (2kM )2 i=1
(3.23)
(3.24)
Respecting the external field in the 1-direction with eq.(3.17) one obtains
X
X
hOiimp =
···
(3.25)
p(sCi )O(sC1 , sC2 , ..., sC#C )
sC1 =±1
26
sC#C =±1
3.5 Improved estimator for the probability distribution pe(Φ1 )
such that
hM1 iimp
#C
X
X X
1
=
···
p(sCi )sCi |Ci |
2kM s =±1
s
=±1 i=1
C1
C#C
#C
|Ci |
1 X
|Ci | tanh βB1
=
2kM i=1
2kM
(3.26)
and after some calculations
#C
X
1
|Ci |
2
2
2
|Ci | 1 − tanh βB1
hM1 iimp =
(2kM )2 i=1
2kM
+ (hM1 iimp )2 .
(3.27)
Since with an external magnetic field the improved estimator of the uniform magnetization has a certain standard error, the uniform susceptibility in the presence of
an external field is calculated after the simulation only:
β
χ1,imp = (hM1 2 iimp − hM1 i2imp ).
(3.28)
V
3.5 Improved estimator for the probability
distribution pe(Φ1) of the first component of the
magnetization
We now introduce an intensive measure of the magnetization order parameter:
~
. M
~ = (Φ1 , Φ2 ) =
Φ
(3.29)
V
For the 1-quantized model we want to measure the probability distribution pe(Φ1 )
of the first component of the magnetization. Therefore a histogram of the first
component of the magnetization of the generated spin configurations has to be
built. Without an external magnetic field B the histogram and the distribution
are symmetric in the direction of the magnetization. The statistical error can be
reduced dramatically by using an improved estimator introduced in [9] based on a
convolution method. There it was used to simulate the antiferromagnetic quantum
Heisenberg model up to a volume of V = 242 . Some further improvements are
developed in this master thesis. They are also described in [1].
3.5.1 Improved building of the histogram
Instead of inserting in the histogram only the unimproved magnetization per generated spin configuration, all possible 2#C magnetizations of the generated breakups
27
3 Monte Carlo method
of one step in the Markov chain are used. The contribution of one cluster Ci is
proportional to ±|Ci |. Naively implemented this would generate a computational
effort of 2#C per spin configuration. The method introduced in [9] reduces it to a
polynomial effort. There it was demonstrated that with the same amount of computational effort the statistical error of the probability distribution pe(Φ1 ) is reduced
dramatically compared to the unimproved method.
A histogram normalized to 1 of the possible magnetizations of one spin configuration with m̄ ∈ {−M̄ , −M̄ + 1, ..., M̄ } is built iteratively by using one cluster per
iteration. Since in our case the cluster length is always a multiple of 2, we will divide
them by 2, and we obtain
kM V
M̄ =
.
(3.30)
2
Remember that without external field each cluster has two orientations with probability 21 . The initial histogram p1 (m̄) is built from the first accounted cluster with
i
1h
p̄1 (m̄) = δm̄, |C1 | + δm̄,− |C1 | .
2
2
2
(3.31)
The histogram in the i-th iteration with i ∈ 2, 3, ..., #C is built according to
1
|Ci |
|Ci |
p̄i (m̄) = p̄i−1 m̄ +
+ p̄i−1 m̄ −
.
2
2
2
(3.32)
In each iteration the resulting histogram is normalized to 1 by construction. The
partial histograms built by eqs.(3.31) and (3.32) always contain zero values either
for even or for odd m. This holds for arbitrarily distributed cluster sizes |C2i | . For
eq.(3.31) this is obvious. For the iteration in eq.(3.32) one has to convince oneself in
four cases. One is, e.g. the following: let us assume that there is a histogram where
the values for even m are zero. If a divided cluster size which is odd is inserted in
this histogram, then the resulting histogram entries at odd m are zero. Analogous
statements hold for the remaining three combinations.
Additionally, k and V are multiples of 2, hence, M̄ is a multiple of 2 as well.
Therefore the resulting final histogram p̄#C (m̄) has non-zero entries only if m̄ is
multiple of 2.
However, even discarding these zeros for small M̄ the final histogram has strong
artifacts of the Trotter decomposition with several symmetries. They even only
disappear slowly for a mean histogram on several spin configurations as shown in
figure 3.5. There are five ”data-lines“ visible: the systematic zeros and four lines
with the non-zero probabilities. If only each eighth point is picked out, one obtains
smooth lines. This case is shown in figure 3.6.
The 1-dimensional case with k = 0 only shows two non-zero ”lines“. Thus to
28
3.5 Improved estimator for the probability distribution pe(Φ1 )
0.005
0.004
0.003
0.002
0.001
0
-3000
-2000
-1000
0
1000
2000
3000
m
Figure 3.5: Mean Histogram of 10 configurations of a 2-dimensional 1-quantized
model. Here k = 4, Lx = Ly = 8, βJ ≈ 7.05, M = 21 which leads
to J ≈ 0.336.
29
3 Monte Carlo method
0.0001
5e-05
0
-30000
-20000
-10000
0
10000
20000
30000
-20000
-10000
0
10000
20000
30000
-20000
-10000
0
10000
20000
30000
-20000
-10000
0
10000
20000
30000
0.0001
5e-05
0
-30000
0.0001
5e-05
0
-30000
0.0001
5e-05
0
-30000
m
Figure 3.6: For one spin configuration of a 2-dimensional 1-quantized model each
eighth value is picked from the histogram p̄#C . The top histogram contains m̄ = 0, the second m̄ = 2, the third m̄ = 4, and the bottom
histogram contains m̄ = 6. The remaining values are systematically
zero. Here k = 4, Lx = Ly = 8, βJ ≈ 7.05, M = 201 which leads to
J ≈ 0.0351.
30
3.5 Improved estimator for the probability distribution pe(Φ1 )
1×10
-4
8×10
6×10
4×10
2×10
1×10
-5
-4
8×10
-5
6×10
-5
4×10
-5
2×10
0 5
-6×10
5
-4×10
-2×10
5
0
m
5
2×10
4×10
5
5
6×10
-5
-5
-5
-5
0 5
-6×10
5
-4×10
-2×10
5
0
m
5
2×10
4×10
5
5
6×10
Figure 3.7: Two histograms p(m) (after binning) obtained for two different spin configurations using the improved estimator. Here k = 4, Lx = Ly = 16,
βJ ≈ 14.1, M = 564 which leads to J = 0.025.
obtain a smooth histogram the values are binned symmetrically with
k−1
X
1
p̄#C (2km + m̄).
p(m) = [p̄#C (2km − k) + p̄#C (2km + k)] +
2
m̄=1−k
(3.33)
Here with
M̂ =
M̄
MV
=
2k
4
(3.34)
which is a multiple of four, one obtains m ∈ {−M̂ , −M̂ + 1, ..., M̂ }.
Figure 3.7 shows the final histogram p(m) of two different spin configurations. In
the left histogram one cluster is bigger than all the other clusters together, therefore
the region around m = 0 is not sampled. In the right histograms there are two large
clusters of similar size, thus the region around m = 0 is sampled.
The numerical effort to build the histogram for one configuration is proportional
to #C and to M̄ . #C itself is proportional to M̄ . Hence, the effort is polynomial
of order O((M V )2 ). If for different volumes V the systematic error of the time
discretization should not change much, is held constant for all volumes. In the
next chapter we will see that we are interested in cubic systems where β will be
proportional to Lx = Ly = L. Thus with constant , M will be proportional to L
as well. Hence, the numerical effort is of order O(L6 )! The effort to build the
histograms is much larger than the effort to build new spin configurations using the
multi-cluster algorithm itself which is of order O(M V ).
31
3 Monte Carlo method
3.5.2 Further optimizations with no influence on the resulting
histogram
Evaluating eq.(3.32) the size M̄i of the domain around m̄ = 0 with non-zero values
grows with each iteration i according to
M̄i =
i
X
|Cj |,
(3.35)
j=1
assuming the clusters are taken into account in the order of their index. On the
other hand, the border of p̄i (m̄) consist of only zero values for i < #C. Hence,
eq.(3.32) should be evaluated only for |m̄| ≤ Mi .
When this is implemented, it is useful to sort the clusters by their sizes before
building the histogram. One then starts with the smallest cluster, finishing with the
largest. Then Mi grows as slowly as possible and so does the computational effort.
This improvement already accelerates the building of the histogram a lot.
Figure 3.8 shows a histogram of the cluster sizes appearing in one particular spin
configuration. There is a relatively large number of clusters with the same small
cluster size. This is motivation to treat clusters with the same size in one iteration.
Therefore the histogram of cluster sizes is built before generating the histogram of
magnetization. Hence, no more sorting is needed to start with the smallest cluster
for the optimization described above. If we denote the number of clusters of size |C|
by n|C| , for even n|C| a histogram corresponding to clusters all of the same size
becomes
n|C|
X
n|C|
1 2
p̄even (m̄) = n|C|
δ|m̄|,2k|C| .
(3.36)
n|C|
2
+k
2
k=0
For odd n|C| one obtains
p̄odd (m̄) =
n|C| −1
2
X 1
2n|C|
k=0
n|C|
n|C| +1
2
+k
δ|m̄|,(2k+1)|C| .
(3.37)
A technical implementation detail is to use two arrays on iteratively building the
histogram. Alternating their roles in each iteration, from one array the previous
histogram is read in, while on the other array the resulting histogram is written out.
Since the histogram is symmetric, only the part for, e.g., m̄ ≥ 0 needs to be held
in memory. With m̄ ∈ 0, 1, ..., M̄ − 1, M̄ eqs.(3.31) and (3.32) are modified to
1
p̄1 (m̄) = δm̄,|C1 |
2
32
(3.38)
3.5 Improved estimator for the probability distribution pe(Φ1 )
200
150
100
50
0
0
500
1000
cluster size
Figure 3.8: Histogram of cluster sizes of one particular spin configuration of a 2dimensional 1-quantized model. Cluster sizes larger than 1000 spins are
not shown. Here k = 4, Lx = Ly = 32, βJ ≈ 28.2, M = 1128 which
leads to J = 0.025.
33
3 Monte Carlo method
and
p̄i (m̄) =
1
p̄i−1 m + |Ci | + p̄i−1 m − |Ci | .
2
(3.39)
As already discussed above, the histograms built by eqs.(3.31) and (3.32) or (3.38)
and (3.39) always contain zero values either for even or for odd m̄. This can be used
to further accelerate the histogram building.
Finally, all these optimizations don’t alter the resulting information of the histograms. The numerical effort to build the histogram for one configuration remains
of order O((M V )2 ). Some of the above optimizations were already used in [9]. At
least the sorting of the cluster sizes and the binomial treatment of clusters of the
same size was not used. This allows us to simulate volumes up to V = 482 compared
to V = 242 in [9]. If none of the above optimizations is implemented, the simulation
becomes about 100 times slower for large volumes.
3.5.3 Further improvements introducing a systematic error
The size of the generated histograms is of order O(M V ) = O(L3 ). Hence, for large
volumes the handling of the results becomes rather bulky. On the other hand, generating histograms p(m) of the magnetization for different volumes one can estimate
that resampling the histograms to sizes of orders about O(L2 ), their information
content is conserved.
This is the starting point to further accelerate the building of the histograms.
Up to now, since the cluster lengths are a multiple of 2, they were divided by this
factor 2 before processing them to histograms. The idea now is to divide the cluster
lengths by a cluster length divider CLD larger than 2. The number of data points
in the histograms is then related to
M̄ =
kM V
,
CLD
M̂ =
M̄
MV
=
.
2k
2CLD
(3.40)
This method is delicate and leads to systematic errors. To reduce these effects, the
remainders of the cluster length divisions are accumulated and treated in the next iteration. The accumulated remainder decides about the probability to stochastically
round up or down. Probably the actual implementation of this error propagation
technique could be optimized. E.g. one could try fractional values for CLD.
The cluster length divider CLD is chosen as large as possible, with the limitation
that the generated systematic error has to be smaller than the statistical error of
the Monte Carlo simulation. Maintaining a similar systematic error for different
volumes, one observes that the chosen cluster length divider grows in proportion
to L in a 2-dimensional quadratic system. The resulting computational effort to
build a histogram of the magnetization of a spin configuration is shown in figure 3.9.
34
3.5 Improved estimator for the probability distribution pe(Φ1 )
log(t/s)
2
0
-2
1
1.5
2
log(L/a)
Figure 3.9: Runtime to generate the probability distribution pe(Φ1 ) of a quadratic 2dimensional 1-quantized XY model. The horizontal axis represents the
length of one side of the lattice. The vertical axis represents the time
needed per spin configuration to generate the configuration and to build
the distribution. Both axes are logarithmic. Both data lines with error bars represent simulations with all optimizations enabled. For the
upper data line the cluster length dividing optimization is disabled. On
the lower line it is enabled with CLD = L. The lowest short straight
reference line without error bars just under the lower data line has a
slope of O(L3.75 ). The upper reference line just above the higher data
line has a slope of O(L6 ). For these simulations k = 4, Lx = Ly = L,
L
and J = 0.025.
β = 1.13475Ja
35
3 Monte Carlo method
With the two reference lines one can estimate the effort with cluster length divider
about O((M V )1.25 ) = O(L3.75 ) or perhaps something of the form O(M V log(M V ))
instead of O((M V )2 ) = O(L6 ) without dividing the cluster lengths.
Compared to no optimization implemented, the simulation with all described
optimizations enabled becomes more than 1000 times faster for large volume.
The cluster length dividing process allows us to handle non-integer cluster sizes
with the histogram building methods described above. This could be of interest for
continuous-time cluster algorithms.
3.5.4 Conversion to the probability distribution pe(Φ1 )
The result of the simulation is the average of the binned histograms of all configurations after thermalization
p̂(m) = hp(m)i,
(3.41)
which is again normalized to 1:
M̂
X
M̂
X
p̂(m) =
m=−M̂
Since we are in a quantum spin
Φ1 for a certain m becomes
p(m) = 1.
(3.42)
m=−M̂
1
2
system, the first component of the magnetization
Φ1 =
1 1
∈ [− , ].
2 2
2M̂
m
(3.43)
Its probability distribution shall be related to the average of the binned histograms
by
Z m+ 1
2M̂
4M̂
dΦ1 pe(Φ1 ) = p̂(m),
(3.44)
m
2M̂
−
1
4M̂
1
1
with pe(Φ1 ) = 0 for Φ1 ∈
/ [−( 21 + 4M̂
), 12 + 4M̂
]. Thus the distribution gets properly
normalized
Z ∞
Z m+ 1
M̂
X
2M̂
4M̂
dΦ1 pe(Φ1 ) =
dΦ1 pe(Φ1 ) = 1.
(3.45)
−∞
m=−M̂
m
2M̂
−
1
4M̂
We use a piecewise constant pe(Φ1 ) for intervals [−( 2m
+
M̂
obtain
m
pe
= 2M̂ p̂(m),
2M̂
and the distribution becomes continuous for → 0.
36
1
), 2m
4M̂
M̂
+
1
].
4M̂
Thus we
(3.46)
3.6 Probability distribution p(Φ) of the magnetization magnitude
5
4
~
p (Φ1)
3
2
1
0
-0.4
-0.2
0
Φ1
0.2
0.4
Figure 3.10: Probability distribution pe(Φ1 ) with k = 4, Lx = Ly = 16, M = 564
which leads to J = 0.025.
Figure 3.10 shows a typical probability distribution pe(Φ1 ). The error bars are
estimated from 40 simulation runs. After thermalization 106 spin configurations
were generated to obtain the distribution. Remarkably the error bars are on the
order of the line width.
3.6 Probability distribution p(Φ) of the magnetization
magnitude
~ the Hamiltonian in eq.(2.1) has a U (1) symWithout an external magnetic field B,
~ of the magnitude of the magnetimetry. Hence, the probability distribution p Φ
zation is independent of the direction in the XY plane
~ = p(|Φ|) = p(Φ).
p Φ
(3.47)
37
3 Monte Carlo method
The probability distribution pe(Φ1 ) of the first component of the magnetization then
can be expressed as
Z 2π
Z ∞
pe(Φ1 ) =
dϕ
dΦ Φ p(Φ) δ(Φ1 − Φ cos ϕ)
0
0
Z ∞
1
=2
dΦ q
p(Φ).
(3.48)
Φ1
1 − ( ΦΦ1 )2
This relation is known as an Abel transformation. Since for Φ → ∞, p(Φ) and dp(Φ)
dΦ
go to zero faster than Φ1 , it can be inverted
Z
1 ∞
de
p(Φ1 )
1
p
p(Φ) = −
.
(3.49)
dΦ1
π Φ
dΦ1
Φ1 2 − Φ2
The probability distribution pe(Φ1 ) is normalized to 1, thus 2πΦp(Φ) is normalized
as well
Z ∞
2π
dΦ Φ p(Φ) = 1.
(3.50)
0
For the numerical integration in the above inverted Abel transformation the fac1
tor (Φ1 2 − Φ2 )− 2 is analytically integrable. Using a basis of Lagrange polynomials
of second order to interpolate the numerical values of pe(Φ1 ) we obtain
m
m+1
m
ln(m)
p
=
pe
− pe
2M̂
2π M̂
2M̂
2M̂
M̂X
−m+1
m+k+1
m+k
m+k−1
1
+
pe
− pe
+ pe
2π M̂ k=1
2M̂
2M̂
2M̂
p
× ln[m + k + k(2m + k)].
(3.51)
38
4 Predictions of low-energy effective
field theory
The non-perturbative microscopic system described by the Hamiltonian in eq.(2.1)
has so many degrees of freedom that it can’t be solved analytically. It has to be
simulated numerically. As already mentioned, there is no known analytical way to
diagonalize the Hamiltonian. We therefore don’t know the ground state.
Without external magnetic field B, the Hamiltonian has a continuous global U (1)
spin symmetry generated by S 3 . We already know that for J > 0 it is energetically
favorable for neighboring spins to point in the same direction. In the limit of infinite
volume and zero temperature (β → ∞) the expectation value of the magnetization
~ is non-zero. The continuous global U (1) symmetry then sponorder parameter M
taneously breaks down to the trivial subgroup {1}. At low energies the spins are
preferably oriented in the direction of the non-zero mean of the magnetization order
parameter.
For a macroscopic set of neighboring spins an excitation in the XY plane normal
~ i is possible. The excitation can be of arbitrarily small energy (no gap) and
to hM
lives again in U (1). The Hamiltonian of the XY model couples the spins with their
next neighbors. Therefore the excitation is propagating through the system.
According to the Goldstone theorem this new mode of excitation is a massless
Nambu-Goldstone boson. In a magnetic system the Nambu-Goldstone boson is a
spin-wave and is known as a magnon. (The XY model can also be interpreted as a
model of superfluidity).
With an external magnetic field the symmetry U (1) is explicitly broken. Then
the Nambu-Goldstone boson becomes massive and is known as a pseudo-NambuGoldstone-boson.
At low energy the Nambu-Goldstone bosons are the relevant degrees of freedom.
To describe their dynamics, an analytic effective field theory can be constructed.
An effective theory depends on parameters, which are not predicted by the effective theory itself. However, it provides analytic predictions which depend on these
parameters.
The macroscopic excitations of neighboring spins can be described with a magne-
39
4 Predictions of low-energy effective field theory
tization order parameter field ~e(x) of unit-vectors living on a unit-circle:
~e(x) = (e1 (x), e2 (x)) ∈ S 1 ,
~e(x)2 = 1.
(4.1)
The quantum XY model that we consider is 2-dimensional and we are interested in
its dynamics. Hence, the field is defined in Euclidean space-time with x = (x1 , x2 , t).
The action for an effective field theory without external magnetic field contains
all possible terms constructed from the order parameter field which respect the
symmetries of the underlying problem. It is written as a derivative expansion.
For low energy high order derivatives are suppressed. The terms in the action are
scalars and hence take the form of scalar products in order to be invariant under
spin rotations. The term ~e(x)2 is of no interest because it is a constant. Furthermore
2~e(x) · ∂µ~e(x) = ∂µ (~e(x)2 ) = 0.
(4.2)
The surviving terms with the P
smallest number of derivatives which respect the symmetries of the XY model are 2i=1 ∂i~e(x) · ∂i~e(x) and ∂t~e(x) · ∂t~e(x). Parametrizing
the terms and adding a term for the external magnetic field the leading terms in the
effective action take the form
Z
hρ
i
1
~ .
S[~e] = d2 x dt
∂i~e · ∂i~e + 2 ∂t~e · ∂t~e − M~e · B
(4.3)
2
c
The low-energy parameters are the spin stiffness ρ, the spin-wave velocity c, and the
magnetization density M. The partition function then is given by
Z
Z = D~e exp(−S[~e]).
(4.4)
If we now identify x3 = ct, the leading terms in the effective action can be written
in Euclidean space-time rotation-invariant form
Z
1ρ
~ .
S[~e] = d3 x
∂µ~e · ∂µ~e − M~e · B
(4.5)
c 2
This symmetry holds only for the leading terms, since the underlying XY model has
no such Euclidean rotational invariance.
Based on the leading terms, one can use the effective theory for a classical 3dimensional O(2) system also for the quantum XY model, a quantum spin system
where the Hamiltonian is defined in two spatial dimensions.
In [1] the effective field theory predictions of Göckeler and Leutwyler [7, 8] for a
3-d O(N ) model were applied to the quantum XY model. The rest of this chapter
is a summary of the results presented in [1]. Only section 4.4 is more detailed than
the presentation in [1].
40
4.1 Probability distribution p(Φ) of the mean magnetization
4.1 Probability distribution p(Φ) of the mean
magnetization
The theory is considered for a finite cubic L×L×β space-time volume with periodic
boundary conditions with β = L/c which leads to the cubic geometry. Note that we
have put Boltzmann’s constant to kB = 1.
The space-time average of the magnetization (mean magnetization) is given by
\[
\vec\Phi = \frac{1}{2L^2\beta} \int d^2x\, dt\; \vec e(x) = \frac{1}{2L^3} \int d^3x\; \vec e(x).
\tag{4.6}
\]
The factor 1/2 renormalizes the spins $\tfrac{1}{2}$ of the microscopic model to the unit-vectors $\vec e(x)$. The relation to the magnetization order parameter in the Hamiltonian of eq.(2.1) is given by eq.(3.29).
Because of the U(1) symmetry, the probability distribution of the mean magnetization $\vec\Phi$ only depends on the magnitude $\Phi = |\vec\Phi|$. The distribution takes the form
\[
p(\Phi) = \frac{1}{Z} \int \mathcal{D}\vec e\; \exp\bigl(-S[\vec e\,]\bigr)\, \delta\Bigl( \Phi - \Bigl| \frac{1}{2L^3} \int d^3x\; \vec e(x) \Bigr| \Bigr).
\tag{4.7}
\]
It is normalized to 1 with exactly the same eq.(3.50) as in the numerical part. In the infinite-volume zero-temperature limit p(Φ) becomes a δ-distribution with its peak at $\widetilde M$ [1]. Thus it is possible to predict the infinite volume limit of $\widetilde p(\Phi_1)$ which the simulations should approach for large volumes:
\[
\widetilde p(\Phi_1) =
\begin{cases}
0 & \Phi_1 > \widetilde M, \\[6pt]
\dfrac{1}{\pi\sqrt{\widetilde M^2 - \Phi_1^2}} & \Phi_1 < \widetilde M.
\end{cases}
\tag{4.8}
\]
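As a small numerical illustration (a sketch, not part of the thesis analysis), the limiting distribution of eq.(4.8) can be evaluated and its normalization checked; the value of $\widetilde M$ is taken from eq.(4.33) further below, and the distribution is assumed symmetric in Φ₁.

```python
import math
from scipy.integrate import quad

M_tilde = 0.43561   # magnetization per spin, value quoted in eq.(4.33)

def p_tilde_limit(phi1):
    """Infinite-volume, zero-temperature limit of p~(Phi_1), eq.(4.8);
    assumed symmetric in Phi_1 and zero outside (-M_tilde, M_tilde)."""
    if abs(phi1) >= M_tilde:
        return 0.0
    return 1.0 / (math.pi * math.sqrt(M_tilde**2 - phi1**2))

# check that the distribution integrates to 1 over its support
norm, _ = quad(p_tilde_limit, -M_tilde, M_tilde)
print(norm)   # ~1.0
```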
4.2 Constraint effective potential
The constraint effective potential u(Φ) is defined by
\[
p(\Phi) = N \exp\bigl(-L^3 u(\Phi)\bigr),
\tag{4.9}
\]
where the normalization factor N derived in [7] is
\[
N = \frac{1}{\widetilde M^2}\, \frac{\rho L}{4\pi^2\, c\, e^{\beta_0/2}}\, \Bigl[\, 1 + O\Bigl(\frac{1}{L^2}\Bigr) \Bigr].
\tag{4.10}
\]
For a cubic space-time volume one obtains β₀ = 1.45385 [6]. The magnetization per spin is $\widetilde M = M a^2$. An extensive variant of u(Φ) is
\[
U(\Phi) = L^3 u(\Phi).
\tag{4.11}
\]
The constraint effective potential in the limit of infinite volume and zero temperature is denoted by U₀. Remarkably, after some rescaling it doesn't depend on the low-energy parameters of the effective theory; it is universal [8]. The dependence of the constraint effective potential on the volume was worked out by Göckeler and Leutwyler [7, 8]. Its 1/L expansion is
\[
U(\Phi) = U_0(\psi) + \frac{c}{\rho L}\, U_1(\psi) + \Bigl(\frac{c}{\rho L}\Bigr)^2 U_2(\psi) + O\Bigl(\frac{1}{L^3}\Bigr),
\tag{4.12}
\]
where Φ is rescaled to ψ with
\[
\psi = \frac{\rho L}{c}\, \frac{\Phi - \widetilde M}{\widetilde M}.
\tag{4.13}
\]
As a result, the minimum of the constraint effective potential remains near the same value of ψ for changing volume. With this rescaling, for large volumes U(Φ) becomes narrow. Hence, with eqs.(4.9) to (4.11) the probability distribution p(Φ) approaches a δ-function, as mentioned in the last section. U₀(ψ) was found to be
\[
\exp\bigl(-U_0(\psi)\bigr) = \int_{-\infty}^{\infty} dx\; \exp\bigl(-ix\psi + \Gamma(ix)\bigr),
\tag{4.14}
\]
with
\[
\Gamma(\xi) = \frac{1}{2} \sum_{n=0}^{\infty} \frac{\beta_n\, \xi^n}{n!}.
\tag{4.15}
\]
The shape-coefficients βn can be found in [13].
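As an illustration (not the thesis code), U₀(ψ) of eq.(4.14) can be evaluated numerically by truncating the series of eq.(4.15); the sketch below keeps only β₀, β₁, β₂ with the cubic-volume values quoted in this chapter, which already reproduces the minimum region of table 5.2 to within a few percent. Higher shape-coefficients from [13] would be needed to reach the precision quoted there.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Shape-coefficients of the cubic space-time volume quoted in the text;
# the series of eq.(4.15) is truncated after beta_2 in this sketch.
beta = [1.45385, 0.225785, 0.010608]

def Gamma(xi):
    """Gamma(xi) of eq.(4.15), truncated series."""
    return 0.5 * sum(b * xi**n / factorial(n) for n, b in enumerate(beta))

def U0(psi):
    """Universal constraint effective potential, eq.(4.14)."""
    integrand = lambda x: np.real(np.exp(-1j * x * psi + Gamma(1j * x)))
    value, _ = quad(integrand, -np.inf, np.inf)
    return -np.log(value)

print(U0(0.1))   # ~ -4.25; compare -4.232 in table 5.2, the difference comes from the truncation
```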
The 1/L correction U1 (ψ) in the expansion of eq.(4.12) depends on the low-energy
parameters and was found to be
\[
U_1(\psi) = \psi + \exp\bigl(U_0(\psi)\bigr) \int_0^{\infty} dx\; \mathrm{Re}\Bigl[ \exp\bigl(-ix\psi + \Gamma(ix)\bigr)\, \Omega(ix) \Bigr],
\tag{4.16}
\]
with
\[
\Omega(\xi) = -\frac{1}{4}\,\xi\,\omega(\xi)^2 - 2\,\omega(\xi) - \frac{\xi^2}{16\pi^2} - k_0\,\xi^2, \qquad
\omega(\xi) = \sum_{n=1}^{\infty} \frac{\beta_n}{(n-1)!}\,\xi^{\,n-1}.
\tag{4.17}
\]
Here k₀ is a higher-order low-energy constant resulting from
\[
k_0 = \frac{2\rho^3}{M^2 c^2}\,(h_1 + h_2) + \frac{1}{64\pi^2}.
\tag{4.18}
\]
The constants h₁ and h₂ appear in the higher-order terms of the effective action
\[
\Delta S[\vec e\,] = -\int d^2x\, dt\; \Bigl[ h_1\, \bigl(\vec e\cdot\vec B\bigr)^2 + h_2\, \vec B^{\,2} \Bigr].
\tag{4.19}
\]
Here ∆ indicates that this is a correction term to eq.(4.5).
4.3 Other predicted quantities
The first and second moment of the probability distribution p(Φ) without external
magnetic field were calculated by Göckeler and Leutwyler as well,
\[
\begin{aligned}
\langle\Phi\rangle &= \widetilde M\, \Bigl[\, 1 + \frac{c}{\rho L}\,\frac{\beta_1}{2} + \frac{c^2}{\rho^2 L^2}\,\frac{\beta_1^2}{8} + O\Bigl(\frac{1}{L^3}\Bigr) \Bigr], \\[4pt]
\bigl\langle(\Phi - \langle\Phi\rangle)^2\bigr\rangle &= \widetilde M^2\, \frac{c^2}{\rho^2 L^2}\,\frac{\beta_2}{2} + O\Bigl(\frac{1}{L^3}\Bigr).
\end{aligned}
\tag{4.20}
\]
There are two more shape-coefficients, which in the cubic space-time volume are β1 =
0.225785 and β2 = 0.010608 [6].
With $M^1 = S^1$ (see eqs.(2.5) and (2.3)) the order parameter susceptibility is defined as
\[
\chi_1 = \frac{1}{L^2}\, \frac{1}{Z} \int_0^{\beta} dt\; \mathrm{Tr}\bigl[ M^1(0)\, M^1(t)\, \exp(-\beta H) \bigr].
\tag{4.21}
\]
It can be measured with improved estimators (see eqs.(3.24) and (3.28)). A further susceptibility, defined with $M^3 = S^3$, is
\[
\chi_3 = \frac{1}{L^2}\, \frac{1}{Z} \int_0^{\beta} dt\; \mathrm{Tr}\bigl[ M^3(0)\, M^3(t)\, \exp(-\beta H) \bigr].
\tag{4.22}
\]
It also can be measured with improved estimators [14].
Hasenfratz and Niedermayer [13] have derived analytic predictions for these susceptibilities. Here they are given for a cubical space-time volume,
\[
\chi_1 = \frac{M^2 L^2 \beta}{2}\, \Bigl[\, 1 + \frac{c}{\rho L}\,\beta_1 + \frac{1}{2}\Bigl(\frac{c}{\rho L}\Bigr)^2 \bigl(\beta_1^2 + \beta_2\bigr) + O\Bigl(\frac{1}{L^3}\Bigr) \Bigr],
\tag{4.23}
\]
and
\[
\chi_3 = \frac{\rho}{c^2} + O\Bigl(\frac{1}{L^3}\Bigr).
\tag{4.24}
\]
Comparing eq.(4.23) with eq.(4.20) one obtains
\[
\bigl\langle(\Phi - \langle\Phi\rangle)^2\bigr\rangle + \langle\Phi\rangle^2 = \langle\Phi^2\rangle = \frac{2\chi_1 \widetilde M^2}{L^2 \beta M^2} = \frac{2\chi_1 a^4}{L^2 \beta}.
\tag{4.25}
\]
The factor 2 is due to the fact that the magnetization vector has two components.
As already mentioned, the U(1) symmetry breaks explicitly when a non-zero external magnetic field $\vec B$ is applied. The Nambu-Goldstone boson becomes a massive pseudo-Nambu-Goldstone boson with mass
\[
m^2 = \frac{M B}{\rho c^2}, \qquad B = \bigl|\vec B\bigr|,
\tag{4.26}
\]
at leading order.
In the so-called p-regime with
\[
\frac{c}{L} \ll 2\pi\rho, \qquad mc^2 \ll 2\pi\rho, \qquad mcL \gg 1,
\tag{4.27}
\]
the field expectation value
\[
\begin{aligned}
\vec\Phi(B) = \widetilde M\, \Bigl[\, 1
&- \frac{1}{8}\Bigl(\frac{c}{\rho L}\Bigr)^2 \sum_{n=0}^{\infty} \frac{2n\,\beta_{n+1}}{n!}\,(mcL)^{2n-2} \\
&+ \frac{1}{8}\Bigl(\frac{c}{\rho L}\Bigr)^2 \sum_{n_1,n_2=0}^{\infty} \frac{(n_1+n_2+1)\,\beta_{n_1+1}\,\beta_{n_2+1}}{n_1!\, n_2!}\,(mcL)^{2n_1+2n_2} \\
&- \frac{1}{2}\,\frac{c}{\rho L} \sum_{n=0}^{\infty} \frac{\beta_{n+1}}{n!}\,(mcL)^{2n}
 - \frac{1}{8}\,\frac{c}{\rho L}\,\frac{1}{(mcL)^{4}}
 + \frac{1}{2}\,\frac{c}{\rho L}\,\frac{1}{(mcL)^{2}} \\
&- \frac{1}{64\pi^2}\Bigl(\frac{mc^2}{\rho}\Bigr)^2 + k_0\Bigl(\frac{mc^2}{\rho}\Bigr)^2 + O(m^3) \Bigr]
\end{aligned}
\tag{4.28}
\]
was given in [1]. The value of $\vec\Phi(B)$ depends on the magnetic field B as well as on the low-energy constant k₀. In the p-regime, for the external magnetic field chosen in the 1-direction, B is sufficiently large such that the spins only fluctuate around the 1-direction. Hence, for $\vec B = (B_1, 0)$ and thus $B_1 = B$ the simple relation
\[
\vec\Phi(B) = \bigl|\langle\Phi_1\rangle(B_1)\bigr|
\tag{4.29}
\]
holds. Thus there is another way to determine the constant k₀ from the simulations.
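As a quick check (an illustrative sketch, not thesis code), the pseudo-Nambu-Goldstone mass of eq.(4.26) can be evaluated with the low-energy constants of eq.(4.33) for a field strength of the size used later in table 5.3, to verify that the p-regime conditions of eq.(4.27) are met.

```python
import math

# Low-energy constants of eq.(4.33) in lattice units (J = a = 1)
M, rho, c = 0.43561, 0.26974, 1.1347

def pseudo_goldstone_mass(B):
    """m from eq.(4.26): m^2 = M B / (rho c^2)."""
    return math.sqrt(M * B / (rho * c**2))

L, B = 64, 0.00306135                 # one parameter set that also appears in table 5.3
m = pseudo_goldstone_mass(B)
print(m * c * L)                      # ~4.5   : mcL is large
print(m * c**2, 2 * math.pi * rho)    # ~0.08 vs ~1.7 : mc^2 well below 2*pi*rho
print(c / L, 2 * math.pi * rho)       # ~0.018 vs ~1.7 : c/L well below 2*pi*rho
```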
4.4 Re-weighting the simulations for an external magnetic field B
One way to obtain the field expectation value $|\langle\Phi_1\rangle(B_1)|$ of the previous section is
to perform Monte Carlo simulations with an external magnetic field and improved
estimators as described in section 3.4. If one is interested in results for more than
a few different B1 or if the simulation with external field is not implemented yet,
then it is alternatively possible to re-weight the probability distribution $\widetilde p(\Phi_1)$. We
found that this works very well at least for values of B in the p-regime.
With eq.(3.29) and the external magnetic field in the 1-direction, the additional Boltzmann factor in the partition function for the (2+1)-dimensional cubic quantum XY model becomes
\[
\exp\bigl(\beta\, \vec M \cdot \vec B\bigr) = \exp\bigl(\beta\, \Phi_1 V B_1\bigr) = \exp\Bigl(\frac{\beta\, \Phi_1 L^2 B_1}{a^2}\Bigr).
\tag{4.30}
\]
Thus the probability distribution obtained from Monte Carlo simulations without external magnetic field can be re-weighted with
\[
\widetilde p(\Phi_1, B_1) = \frac{1}{Z(B_1)}\, \widetilde p(\Phi_1)\, \exp\Bigl(\frac{\beta\, \Phi_1 L^2 B_1}{a^2}\Bigr),
\tag{4.31}
\]
where
\[
Z(B_1) = \int_{-\infty}^{\infty} d\Phi_1\; \widetilde p(\Phi_1)\, \exp\Bigl(\frac{\beta\, \Phi_1 L^2 B_1}{a^2}\Bigr).
\tag{4.32}
\]
The numerical integration for $|\langle\Phi_1\rangle(B_1)|$ is performed using a basis of Lagrange polynomials of second order.
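A minimal sketch of this re-weighting step is shown below (function and argument names are illustrative, not the thesis implementation). It assumes $\widetilde p(\Phi_1)$ is available on a grid of Φ₁ values and uses the trapezoidal rule instead of the second-order Lagrange basis; the exponent is shifted by its maximum before exponentiation to avoid overflow, which cancels in the ratio.

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal rule, standing in for the Lagrange-basis integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def reweight_expectation(phi1, p_phi1, beta, L, B1, a=1.0):
    """Re-weight a B = 0 estimate of p~(Phi_1) to a field B_1, eqs.(4.31)/(4.32),
    and return |<Phi_1>|(B_1).  phi1: grid of Phi_1 values, p_phi1: p~(Phi_1)."""
    phi1 = np.asarray(phi1, dtype=float)
    p_phi1 = np.asarray(p_phi1, dtype=float)
    arg = beta * phi1 * L**2 * B1 / a**2       # exponent of eq.(4.30)
    w = p_phi1 * np.exp(arg - arg.max())       # re-weighted weights, up to a constant
    Z = _trapz(w, phi1)                        # eq.(4.32)
    return abs(_trapz(phi1 * w, phi1) / Z)     # |<Phi_1>|(B_1)
```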
4.5 Determination of the low-energy parameters
The effective field theory doesn’t provide any prediction for its low-energy parameters. They have to be determined by matching results of the effective theory to
numerical simulations of the underlying microscopic system. In [15] the parameters
have been determined to be
\[
M = 0.43561(1)/a^2, \qquad \rho = 0.26974(5)\,J, \qquad c = 1.1347(2)\,Ja.
\tag{4.33}
\]
These low-energy constants are obtained by fitting the Monte Carlo results of the
improved estimators for χ1 and χ3 to their analytic predictions.
Once the low-energy parameters are fixed with sufficient precision, the analytic
predictions of the effective theory can be calculated and compared to Monte Carlo
simulations. This is done in the next chapter.
5 Comparison of simulation results with the theoretical predictions
All simulations were performed with a time discretization such that $\epsilon J = 0.025$. The resulting discretization error is smaller than the statistical errors. This was tested on small volumes 16² and 24² by comparing the resulting constraint effective potentials U(Φ) obtained with various values of $\epsilon$.
Statistical errors were estimated by performing 40 independent simulations for
each simulation parameter set. The uncertainties of the low-energy parameters of
eq.(4.33) were incorporated by error propagation.
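A sketch of how such independent runs can be combined (illustrative only; the error propagation from eq.(4.33) is a separate step):

```python
import numpy as np

def mean_and_error(run_results):
    """Combine results of N independent simulations (the text uses 40 per
    parameter set): sample mean and standard error of the mean."""
    r = np.asarray(run_results, dtype=float)
    return r.mean(), r.std(ddof=1) / np.sqrt(len(r))
```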
Thanks to the optimizations of the improved estimator, it was possible to simulate volumes up to 64² to obtain the figures 5.1 to 5.6, and up to 96² for the re-weighted data in table 5.3.
5.1 Probability distribution p(Φ) of the magnetization magnitude
Figure 5.1 shows the probability distribution 2πΦp(Φ) of the magnitude of the magnetization Φ for some selected volumes. The distribution is asymmetric. It was
created from the probability distribution $\widetilde p(\Phi_1)$ of the first component of the magnetization using the numerical integration described in eq.(3.51), where Lagrange
polynomials of second order were used. Higher orders were tested, but were found
not to be advantageous. For increasing L/a, which is equivalent to increasing volume V , the mean value of the distribution decreases and the distribution becomes
narrower. With higher volumes it should approach the theoretical infinite volume and zero temperature limit $\widetilde M = 0.43561(1)$, such that the distribution p(Φ) would approach a δ-function. Figure 5.1 confirms this.
For all simulated volumes the error of the numerical norm of p(Φ) was less than $2\times 10^{-5}$.
Figure 5.1: Probability distributions 2πΦ p(Φ) for L/a ∈ {8, 12, 16, 24, 64}. Additionally to the distributions, error bars are placed in equidistant positions of Φ. The theoretical prediction of $\Phi = \widetilde M = 0.43561(1)$ for infinite volume and zero temperature is indicated by a vertical line.
L/a   ⟨Φ⟩ MC         ⟨Φ⟩ theory     ⟨(Φ−⟨Φ⟩)²⟩ MC       ⟨(Φ−⟨Φ⟩)²⟩ theory
 8    0.46205(3)     0.46224(1)     1.90(7)×10⁻⁴        2.7831(1)×10⁻⁴
12    0.45305(10)    0.45319(1)     1.2(3)×10⁻⁴         1.23694(6)×10⁻⁴
16    0.44875(3)     0.44873(1)     4.6(8)×10⁻⁵         6.958(4)×10⁻⁵
20    0.44607(9)     0.44608(1)     4(2)×10⁻⁵           4.453(2)×10⁻⁵
24    0.44432(10)    0.44432(1)     7(27)×10⁻⁶ *        3.092(2)×10⁻⁵
28    0.44307(9)     0.44306(1)     4(22)×10⁻⁶ *        2.272(1)×10⁻⁵
32    0.44198(8)     0.44212(1)     4(2)×10⁻⁵ *         1.7394(9)×10⁻⁵
40    0.44086(9)     0.44081(1)     9(25)×10⁻⁶ *        1.1132(6)×10⁻⁵
48    0.43999(8)     0.43994(1)     6(200)×10⁻⁷ *       7.731(4)×10⁻⁶
64    0.43880(9)     0.43885(1)     4(250)×10⁻⁷ *       4.3495(2)×10⁻⁶

Table 5.1: First centered moment ⟨Φ⟩ and second centered moment ⟨(Φ−⟨Φ⟩)²⟩ of the probability distribution p(Φ) of the magnetization magnitude. The Monte Carlo data (MC) are compared to the effective field theory predictions for several L/a. The entries with an asterisk (*) are statistically consistent with zero.
5.2 Centered moments of p(Φ)
The data of the probability distribution p(Φ) was used to compute the first centered moment ⟨Φ⟩ and the second centered moment ⟨(Φ−⟨Φ⟩)²⟩ for several volumes.
Lagrange polynomials of second order were used in the numerical integration. Higher
orders were tested as well. The results are listed in table 5.1. They are compared
to the predictions of the effective field theory at order O(1/L2 ) given in eq.(4.20).
The uncertainties of the low-energy parameters of eq.(4.33) are respected by error
propagation. Hence, in the table an error is indicated for the theoretical values
as well. For L/a ≥ 16 the first moment calculated from Monte Carlo data agrees
very well with the one predicted by the effective theory. For the second moment
calculated from Monte Carlo data the statistical error is larger than the absolute
value in most cases. Still, for small L, there seem to be systematic discrepancies.
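For orientation, the theory columns of table 5.1 follow directly from eq.(4.20) with the central values of eq.(4.33); a sketch (error propagation omitted):

```python
# Evaluate the O(1/L^2) predictions of eq.(4.20) with the central values of eq.(4.33).
M_tilde = 0.43561               # magnetization per spin
rho, c = 0.26974, 1.1347        # spin stiffness and spin-wave velocity (units of J and J a)
beta1, beta2 = 0.225785, 0.010608

def moments_theory(L):
    """Return (<Phi>, <(Phi - <Phi>)^2>) of eq.(4.20); L in units of a."""
    x = c / (rho * L)
    first = M_tilde * (1.0 + x * beta1 / 2.0 + x**2 * beta1**2 / 8.0)
    second = M_tilde**2 * x**2 * beta2 / 2.0
    return first, second

print(moments_theory(16))   # ~(0.44873, 6.96e-05), compare table 5.1
```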
Higher-order terms in the effective action are not worked out yet. Hence, one can
try to account for them by adding O(1/L3 ) corrections in eq.(4.20) parametrized by
unknown coefficients α1 and α2 , such that
\[
\begin{aligned}
\langle\Phi\rangle &= \widetilde M\, \Bigl[\, 1 + \frac{c}{\rho L}\,\frac{\beta_1}{2} + \frac{c^2}{\rho^2 L^2}\,\frac{\beta_1^2}{8} + \alpha_1 \Bigl(\frac{c}{\rho L}\Bigr)^3 + O\Bigl(\frac{1}{L^4}\Bigr) \Bigr], \\[4pt]
\bigl\langle(\Phi - \langle\Phi\rangle)^2\bigr\rangle &= \widetilde M^2\, \frac{c^2}{\rho^2 L^2}\,\frac{\beta_2}{2} + \alpha_2 \Bigl(\frac{c}{\rho L}\Bigr)^3 + O\Bigl(\frac{1}{L^4}\Bigr).
\end{aligned}
\tag{5.1}
\]
Fitting the coefficients to the Monte Carlo data results in α1 = 0.0013(2) and α2 =
−0.00061(5). Hence, the effective field theory describes the results obtained by the
simulations very well.
Figure 5.2: Intensive variant of the constraint effective potential u(Φ) for L/a ∈ {8, 12, 16, 24, 64}.
5.3 Constraint effective potential u(Φ)
The constraint effective potential u(Φ) can be calculated from the probability distribution p(Φ) of the magnetization magnitude using eqs.(4.9) and (4.10). The
uncertainty of the normalization factor N implied by those of the low-energy parameters of eq.(4.33) is respected by error propagation.
Figure 5.2 shows the constraint effective potential for the same volumes as in
figure 5.1. With increasing volume the potential approaches the effective potential
which is known to be a convex function [16].
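A minimal sketch of this conversion (illustrative; the uncertainty of N is not propagated here):

```python
import numpy as np

def constraint_potential(p_phi, N, L):
    """Invert eq.(4.9), p(Phi) = N exp(-L^3 u(Phi)):  u = -ln(p/N) / L^3.
    Also returns the extensive variant U = L^3 u of eq.(4.11)."""
    u = -np.log(np.asarray(p_phi, dtype=float) / N) / L**3
    return u, L**3 * u
```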
5.4 Rescaled constraint effective potential U (ψ)
For the same volumes as before, in figure 5.3 the extensive quantity U (Φ) is shown as
a function of the rescaled variable ψ according to eq.(4.13). Again the uncertainties entering from the low-energy parameters are respected by error propagation. In addition, the infinite volume zero temperature result U₀(ψ), which is given in eq.(4.14), is shown.

Figure 5.3: Extensive variant of the rescaled constraint effective potential U(ψ) for L/a ∈ {8, 12, 16, 24, 64}. The thicker line without error bars is the infinite volume zero temperature limit U₀(ψ) predicted by the effective theory.
In the following, to extract values of the rescaled constraint effective potential U(ψ) for a specific ψ, interpolation of first order will be used. Again, higher orders were tested with no resulting advantage.
5.5 Fit to U (ψ)
Now the Monte Carlo data for U (ψ) are fitted to the 1/L expansion given in
eq.(4.12). For this purpose, for ψ fixed to a certain value, rescaled constraint effective potentials U (ψ) obtained in Monte Carlo simulations at several values of L/a
between 10 and 64 were used. The whole process was repeated for several values
of ψ. This yields values for U₀(ψ), U₁(ψ), and U₂(ψ). The latter fit function U₂ will not be of further interest. U₀(ψ) is compared to its theoretical prediction, which is universal and given in eq.(4.14). There are no adjustable parameters.

Figure 5.4: Fit of the Monte Carlo data for U(ψ) for several L to the 1/L expansion of eq.(4.12) for ψ = 0.08. The error bars represent the Monte Carlo data. The continuous line with an error band represents the fit. The rightmost point for L = 8 was not included in the fit.
Figure 5.4 shows the resulting fit of the Monte Carlo data to the 1/L expansion
for one value of ψ.
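At fixed ψ this is a quadratic fit in the expansion parameter c/(ρL) of eq.(4.12). A minimal unweighted version is sketched below (the actual analysis takes the Monte Carlo errors into account):

```python
import numpy as np

rho, c = 0.26974, 1.1347    # low-energy constants of eq.(4.33)

def fit_U_expansion(L_values, U_values):
    """Unweighted fit of U(psi), measured at fixed psi for several L, to the
    1/L expansion of eq.(4.12): U = U0 + (c/(rho L)) U1 + (c/(rho L))^2 U2.
    Returns (U0, U1, U2)."""
    x = c / (rho * np.asarray(L_values, dtype=float))
    U2, U1, U0 = np.polyfit(x, np.asarray(U_values, dtype=float), deg=2)
    return U0, U1, U2
```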
Figure 5.5 and table 5.2 show the resulting fit parameter U0 for several ψ ∈ [0, 0.2].
This interval is located around the minimum of the constraint effective potential.
There is remarkably good agreement with the theoretical prediction for U₀(ψ), which is represented by the line without error bars in the figure.
Figure 5.5: The line is the analytic prediction of the universal function U₀(ψ) from the effective field theory. The dots with error bars represent the fit parameter U₀(ψ) obtained from the Monte Carlo simulations.
ψ      U₀(ψ) MC     U₀(ψ) theory
0      −2.985(6)    −2.980
0.05   −3.781(4)    −3.781
0.1    −4.229(2)    −4.232
0.15   −4.222(2)    −4.224
0.2    −3.632(5)    −3.640

Table 5.2: Comparison of Monte Carlo data (MC) for the universal function U₀(ψ) with the effective theory prediction.
Figure 5.6: The dots with error bars represent the fit parameter U₁(ψ)MC obtained from the Monte Carlo simulations. The line is the result of a fit of k₀ to U₁(ψ)MC, with k₀ = −0.0027(2).
5.6 Determination of k0
We can use the Monte Carlo result for U1 (ψ) to determine the low-energy constant k0 . In [1] the Monte Carlo data of U1 (ψ) were fitted to the effective field
theory predictions described by eqs.(4.16) and (4.17). The best fit was found for
k0 = −0.0027(2). Both the numerical and the analytic results for U1 (ψ) are shown
in figure 5.6.
Additionally, in [1] k₀ was determined by using a fit to the theoretical prediction of the expectation value of the magnetization described by eq.(4.28). This value depends on the external magnetic field $\vec B$. Up to now, the simulations were performed without an external field. The expectation value of the magnetization was $\widetilde M$. Eq.(4.28) was derived in the p-regime defined by eq.(4.27).
L/a   B/J          $\vec\Phi(B)$
64    0.00306135   0.44053(3)
64    0.0042       0.44142(4)
72    0.00328      0.44076(4)
72    0.00386      0.44121(7)
80    0.00266      0.44028(4)
80    0.00313      0.44069(5)
88    0.0022       0.43985(3)
88    0.00258      0.44021(6)
96    0.00184      0.43950(3)
96    0.00217      0.43984(5)

Table 5.3: Monte Carlo data for $\vec\Phi(B)$ which are used in the determination of k₀.
With eq.(4.26) this leads to
\[
\frac{c}{\rho} = 4.2\,a, \qquad
B \ll \frac{\rho^3}{M c^2} = 3.499\times 10^{-2}\,J, \qquad
B \gg \frac{\rho}{M L^2} = 0.61922\,\frac{Ja^2}{L^2}.
\tag{5.2}
\]
We used two ways to obtain $|\langle\Phi\rangle(B)| = |\langle\Phi_1\rangle|$ from the Monte Carlo data of $\widetilde p(\Phi_1)$. For table 5.3, $\widetilde p(\Phi_1)$ was re-weighted according to eq.(4.31). Then the resulting $\widetilde p(\Phi_1, B)$ was used to calculate $|\langle\Phi_1\rangle|$. Using L/a between 64 and 96 and fitting to eq.(4.28), the resulting low-energy constant is k₀ = −0.0026(3). This perfectly matches the k₀ obtained from U₁(ψ).
The above results show that the re-weighting technique worked well. Additional
Monte Carlo simulations with external magnetic field B and modified improved
estimators according to eq.(3.26) were also performed. For each L and for the
largest B in table 5.3, simulations were carried out. The results of the improved estimator were compared to those in table 5.3. Since the largest B values are the most challenging for the re-weighting, the fact that the results matched again shows that the re-weighting is reliable.
For each value of B a separate simulation has to be performed for the improved
estimator. The practical advantage of the re-weighting is that one simulation can
be re-weighted to arbitrary B values in the p-regime.
6 Conclusions
Precise Monte Carlo simulations for the 2-dimensional quantum XY model in the
cubical regime were performed using an improved estimator for the probability distribution of the first component of the magnetization. For the same model low-energy
constants determined in [17] were used for predictions with a low-energy effective
field theory. The functions and constants related to the probability distribution of
the first component of the magnetization obtained from the Monte Carlo simulations
agree very well with the theoretical predictions.
Additional low-energy parameters were calculated. One parametrizes higher-order
terms in the effective action, and two of them parametrize higher-order terms of the
theoretical prediction of centered moments of the magnetization magnitude.
The improved estimator for the probability distribution $\widetilde p(\Phi_1)$ introduced in [9] allows us to generate the constraint effective potential U₀ from Monte Carlo data with high precision. The optimizations made in this thesis reduce the computational effort for this estimator by a further factor of more than 1000 for large volumes. This allowed spatial volumes as large as 64². Despite this progress, the reduction of the statistical error for the constraint effective potential U₀ and for the function U₁ was moderate. Looking at figure 5.4, investing in smaller L with much higher statistics could be interesting. A larger impact is expected for the optimizations of the improved estimator together with the re-weighting used to calculate $\vec\Phi(B)$ in the p-regime. A project where this technique is used is currently under investigation.
The high-precision quantitative agreement of the simulation results with the effective field theory is very encouraging. In QCD, the same method is used, comparing lattice QCD results to predictions of chiral perturbation theory. One can expect that, in a long-term view, results of similar precision could be reached in QCD.
Acknowledgements
I would like to thank Prof. Dr. Uwe-Jens Wiese for this assignment, for his supervision and motivation, for the chance to participate in a publication, and for the many interesting and informative discussions. I would like to thank Dr. Urs Gerber for his coaching and for our pleasant collaboration, as well as Dr. Fu-Jiun Jiang for the collaboration. I would also like to thank Dr. Markus Moser for his support with the number-crunching infrastructure.
Finally, all this would not have been possible without the support of my lovely wife Madlen. Thank you for your love and your patience. During this thesis our sweet daughter Cindy Fabienne was born. Welcome to this world. You changed our whole life.
Bibliography
[1] U. Gerber, C. P. Hofmann, F.-J. Jiang, G. Palma, P. Stebler, and U.-J. Wiese,
arXiv:1102.3317, accepted for publication in J. Stat. Mech.
[2] S. Chakravarty, B. I. Halperin, and D. R. Nelson, Phys. Rev. B39 (1989) 2344.
[3] H. Neuberger and T. Ziman, Phys. Rev. B39 (1989) 2608.
[4] D. S. Fisher, Phys. Rev. B39 (1989) 11783.
[5] P. Hasenfratz and F. Niedermayer, Phys. Lett. B268 (1991) 231.
[6] P. Hasenfratz and F. Niedermayer, Z. Phys. B92 (1993) 91.
[7] M. Göckeler and H. Leutwyler, Nucl. Phys. B350 (1991) 228.
[8] M. Göckeler and H. Leutwyler, Phys. Lett. B253 (1991) 193.
[9] U. Gerber, C. P. Hofmann, F.-J. Jiang, M. Nyfeler, and U.-J. Wiese, J. Stat.
Mech. (2009) P03021.
[10] B. B. Beard and U.-J. Wiese, Phys. Rev. Lett. 77 (1996) 5130.
[11] M. Lüscher, Computer Phys. Comm. 79 (1994) 100.
[12] R. Brower, S. Chandrasekharan, and U.-J. Wiese, Physica A 261 (1998) 520.
[13] P. Hasenfratz and H. Leutwyler, Nucl. Phys. B343 (1990) 241.
[14] U.-J. Wiese and H.-P. Ying, Z. Phys. B93 (1994) 147.
[15] F. J. Jiang and U.-J. Wiese, arXiv:1011.6205.
[16] L. O’Raifeartaigh, A. Wipf, and H. Yoneyama, Nucl. Phys. B271 (1986) 653.
[17] F. J. Jiang, Phys. Rev. B83 (2011) 024419.