Studies of Infinite Two-Dimensional Quantum Lattice Systems with Projected Entangled Pair States

By Jacob Jordan, B.Eng. (Hons 1st, 2003), UQ

A thesis submitted for the degree of Doctor of Philosophy at The University of Queensland in January 2011
School of Physical Sciences

© Jacob Jordan, 2011. Produced in LaTeX 2ε.

This thesis is composed of my original work, and contains no material previously published or written by another person except where due reference has been made in the text. I have clearly stated the contribution by others to jointly-authored works that I have included in my thesis.

I have clearly stated the contribution of others to my thesis as a whole, including statistical assistance, survey design, data analysis, significant technical procedures, professional editorial advice, and any other original research work used or reported in my thesis. The content of my thesis is the result of work I have carried out since the commencement of my research higher degree candidature and does not include a substantial part of work that has been submitted to qualify for the award of any other degree or diploma in any university or other tertiary institution. I have clearly stated which parts of my thesis, if any, have been submitted to qualify for another award.

I acknowledge that an electronic copy of my thesis must be lodged with the University Library and, subject to the General Award Rules of The University of Queensland, immediately made available for research and study in accordance with the Copyright Act 1968. I acknowledge that copyright of all material contained in my thesis resides with the copyright holder(s) of that material.

Statements of Contributions

Statement of Contributions to Jointly Authored Works Contained in this Thesis

• J. Jordan, R. Orús, G. Vidal, F. Verstraete and J. I. Cirac, Classical Simulation of Infinite-Size Quantum Lattice Systems in Two Spatial Dimensions, Physical Review Letters 101, 250602 (2008).

This paper outlined the first iPEPS algorithm for computing the ground state of infinite 2D quantum lattice systems. The main ideas behind this algorithm were devised by GV, FV and JIC. Most of the implementation of the algorithm was performed by JJ and RO. The paper was written by RO and GV.

• J. Jordan, R. Orús and G. Vidal, Numerical study of the hard-core Bose-Hubbard model on an infinite square lattice, Physical Review B 79, 174515 (2009).

This paper details the application of the iPEPS algorithm to the hard-core Bose-Hubbard model. The simulations were performed by JJ. The paper was written by GV with assistance from JJ and RO.

Statement of Contributions by Others to the Thesis as a Whole

This thesis was completed under the supervision of Prof. Guifre Vidal and Dr. Roman Orus. The results of Chapter 6 are based on an algorithm conceived by RO and GV (Ref. [OV08]). Chapter 8 is heavily based on a paper by JJ, GV and RO (Ref. [JOV09]). Chapter 11 extends upon work in 1D by RO. The structure of this chapter was suggested by RO. The thesis was otherwise entirely written by the author.

Statement of Parts of the Thesis Submitted to Qualify for the Award of Another Degree

None.

Published Works by the Author Incorporated into the Thesis

• J. Jordan, R. Orús, G. Vidal, F. Verstraete and J. I. Cirac, Classical Simulation of Infinite-Size Quantum Lattice Systems in Two Spatial Dimensions, Physical Review Letters 101, 250602 (2008). This publication is the basis for Chapters 5 & 7.

• J. Jordan, R. Orús and G. Vidal, Numerical study of the hard-core Bose-Hubbard model on an infinite square lattice, Physical Review B 79, 174515 (2009). This publication is the basis for Chapter 8.

Additional Published Works by the Author Relevant to the Thesis but not Forming Part of it

• P. Corboz, J. Jordan and G. Vidal, Simulation of 2D fermionic lattice models with Projected Entangled-Pair States: Next-nearest neighbor Hamiltonians, Physical Review B 82, 245119 (2010).
Acknowledgments

I would firstly like to thank my supervisor Prof. Guifre Vidal for all of his support and guidance throughout my PhD. It has been a great privilege to work alongside such an accomplished scientist. I would also like to thank my co-supervisor Dr. Roman Orus. Roman and I worked very closely on this particular project, and his support, patience, encouragement and positive frame of mind were exceedingly important. I would also like to thank my colleagues in the University of Queensland Physics Department for their comradeship over the last few years. A PhD is by nature a lonely path and their friendship has been important.

I would like to acknowledge the financial support provided by the Australian Research Council, Prof. Vidal, the University of Queensland School of Mathematics and Physics and the University of Queensland Graduate School.

In December of 2008 I visited the research group of Prof. Immanuel Bloch in Mainz as a guest of Dr. Belen Paredes. I would like to thank Belen for her help in arranging a most enjoyable visit.

Outside of UQ, I have enjoyed an association with Oxley United Football Club for many years as a player, coach and committee member. This has not only been a place to unwind, but has taught me the importance of community and volunteer spirit - values that can easily dim in the singular vision of the research scientist. I thank the many close friends I have met through the club for the good times we have had.

Most of all, I would like to thank my parents, Peter and Linda Jordan, my brothers Luke and Will, my sister Hannah and all of my extended family for their inspiration, love and support. It is to you that I dedicate this thesis.

Abstract

Determining the properties of quantum many-body systems is a central challenge in modern physics. Being able to determine the macroscopic properties of a system from its microscopic description would hasten progress in many fields of science and technology.
However, we currently lack the tools to solve such problems in general, or even to develop a theoretical intuition about how many systems might behave. From a simple Hamiltonian description of a system, one may obtain complex, highly correlated collective behaviour.

Computational techniques have played a major part in the effort to determine the properties of quantum many-body systems. However, as the number of degrees of freedom in a system scales exponentially with the system size, numerical diagonalization of the Hamiltonian quickly becomes computationally intractable, and one must develop more efficient approximate techniques to explore the system. Present numerical methods such as quantum Monte Carlo and series expansion have provided insight into many systems of interest, but are also held back by fundamental difficulties.

In this thesis, we focus on tensor networks, a relatively new ansatz for representing quantum many-body states. Tensor networks are motivated by two ideas from quantum information: firstly, that quantum entanglement is the source of the immense difficulty of simulating quantum systems classically, and secondly, that the ground states of certain Hamiltonians exist in a low-entanglement region of the entire Hilbert space. The strength of tensor networks is that they provide a systematic way of representing this class of low-entanglement quantum states. In particular, this thesis describes the iPEPS algorithm for computing the ground states of infinite, two-dimensional quantum lattice systems based on the Projected Entangled Pair States (PEPS) ansatz. We then benchmark the algorithm by computing the phase diagrams of several systems that have been studied with other techniques. Lastly, we apply our algorithm to problems that are not well solved by current approaches, such as frustrated spin systems.

Keywords: Projected entangled pair states, quantum many-body systems, simulation algorithms, tensor networks, quantum entanglement, quantum information.
Australian and New Zealand Standard Research Classifications (ANZSRC): 020401 Condensed Matter Characterisation Technique Development (50%), 020603 Quantum Information, Computation and Communication (50%).

List of Abbreviations and Symbols

TN Tensor network
MPS Matrix product state
PEPS Projected entangled pair state
TPS Tensor product state
CTM Corner transfer matrix
CTMRG Corner transfer matrix renormalization group
DMRG Density matrix renormalization group
QMC Quantum Monte Carlo
MF Mean-field

Contents

List of Tables
List of Figures
1 Introduction
2 The Quantum-Classical Correspondence
3 Foundations of Tensor Networks
3.1 Basic tensor operations
3.2 Tensor Network Descriptions of Quantum States
3.3 Efficient Representations of Quantum States
4 Dimension and Computational Complexity
4.1 D = 0
4.2 D = 1
4.3 D = 2
5 Projected Entangled Pair States
5.1 Projected Entangled Pair States
5.2 Problem Overview
5.3 The iPEPS Algorithm
5.4 Computing Physical Properties of PEPS States
5.5 Concluding Remarks
6 The 2D Classical Ising Model
6.1 Introduction
6.2 The Model
6.3 Results
7 The 2D Quantum Ising Model
7.1 Introduction
7.2 Results
8 The Hard-Core Bose-Hubbard Model
8.1 Introduction
8.2 Model
8.3 Results
8.4 Conclusion
9 The Quantum Potts Model
9.1 Introduction
9.2 The quantum Potts Model
9.3 q = 3 Results
9.4 q = 4 Results
9.5 Conclusion
10 The J1-J2 Model
10.1 Introduction
10.2 The J1-J2 Hamiltonian
10.3 Columnar and Plaquette Ordered Ground States
10.4 Algorithmic Considerations
10.5 Results
10.6 Concluding Remarks
11 Geometric Entanglement
11.1 Introduction
11.2 Computing the Closest Product State
11.3 Results
11.4 Concluding Remarks
12 Conclusion
12.1 Thesis Review
12.2 Final Comments
A Infinite MPS Methods for Computing the Environment
A.1 Problem Overview
A.2 Evolution of an Infinite MPS by Non-unitary Operators
A.3 MPS-based Contraction Schemes for PEPS
B Corner Transfer Matrix Renormalization Group Algorithms for the Square Lattice
B.1 Problem Overview
B.2 The Corner Transfer Matrix Renormalization Group for iPEPS
B.3 Coarse-Graining Approaches
B.4 The Directional CTMRG Approach
B.5 An Improved Directional Algorithm
B.6 Recent Developments
C Update Schemes for PEPS tensors
C.1 Problem Overview
C.2 A Variational Algorithm
C.3 Conjugate Gradient Algorithm
C.4 Comparison of the Methods
References

List of Tables

5.1 Leading order computational complexity of various iPEPS algorithms for the square lattice
7.1 Critical point and exponent β as a function of D.
9.1 The location of the phase transition for various versions of the algorithm and different D. The 'lowest energy' solution is taken from the lowest energy ground states on either side of the phase transition.
9.2 A summary of results for the quantum Ising model and q = 3 and q = 4 quantum Potts models.
10.1 Leading order computational cost of four PEPS schemes for the J1-J2 model.
11.1 Density of global geometric entanglement at the critical point of the quantum Ising model for various values of dimensionality
A.1 The leading order computational costs for the square, hexagonal, Kagome and triangular lattice infinite-MPS based contraction schemes.

List of Figures

3.1 Example of a simple three-index tensor.
3.2 Examples of common tensors and their equivalent mathematical form
3.3 Example of a tensor permutation
3.4 Example of a tensor reshape
3.5 The process for contracting a simple three-tensor tensor network.
3.6 An example of the process of splitting a tensor
3.7 Some Common TN Quantum State Operations
3.8 The area law of quantum entanglement for ground states of local Hamiltonians
3.9 The representation of a quantum state with coefficients C_{i1 i2 ... iN} in an MPS form.
3.10 The representation of a quantum state with coefficients C_{i1 i2 ... iN} in a PEPS form. Here we label a D-dimensional bond index and a d-dimensional physical index.
3.11 The refinement parameter and the accessibility of the Hilbert space with TNs
4.1 The computation of a local observable of a 1D classical system
4.2 The imaginary-time evolution of a point particle
4.3 A 4 × 4 2D tensor network that might represent, for example, a partition function of a 2D classical system at finite temperature
4.4 An infinite, translationally invariant 2D network defined by the four-legged tensor a. The tensor θ describes the boundary state.
4.5 The iTEBD algorithm
4.6 The computation of a local observable quantity, M, of an MPS
4.7 The contraction of a 2D tensor network, using the MPS to describe the boundary state
4.8 Corner Transfer Matrix basics
4.9 The CTMRG approach
5.1 Representing 2D systems with a 1D ansatz
5.2 The infinite PEPS for the square lattice
5.3 The four basic stages of the PEPS algorithm
5.4 The environment around a tensor link
5.5 The computation of a local observable as a 2D tensor network contraction
5.6 The calculation of spatial correlation functions with PEPS.
5.7 The expression of the fidelity as a 2D tensor network. Here, the states |ψ0(λ)⟩ and |ψ0(λ′)⟩ are translationally invariant and contain tensors A,B and C,D respectively.
5.8 The tensor contraction that gives the four-legged tensor containing the coefficients of the two-site reduced density matrix.
6.1 The process for expressing the partition function as a 2D square tensor network
6.2 A plot of the magnetization and magnetization error (below) of the 2D classical Ising model for various χ, along with the exact solution
6.3 A plot of the magnetization of the 2D classical Ising model for various numbers of boundary state iterations. Note that as the number of iterations increases, we more closely track the exact solution.
6.4 A plot showing the two-point correlation function of the classical Ising model for various χ along with the exact solution
7.1 Transverse magnetization mx and energy per site e of the quantum Ising model as a function of the transverse magnetic field h
7.2 Magnetization mz(λ) of the quantum Ising model as a function of the transverse magnetic field λ
7.3 A comparison of the order parameter for iMPS, CTMRG and the simplified update
7.4 Two-point correlator Sxx(l) of the quantum Ising model near the critical point
7.5 Fidelity diagram of the quantum Ising model
8.1 Particle density ρ(µ), energy per lattice site ε(ρ) and condensate fraction ρ0(ρ) of the HCBH model for a PEPS/TPS with D = 2, 3.
8.2 Purity r and entanglement entropy SL as a function of the chemical potential µ
8.3 Two-point correlation function C(s) of the HCBH model, versus distance s (measured in lattice sites), along a horizontal direction of the lattice
8.4 Fidelity per lattice site f(µ1, µ2) for the ground states of the HCBH model
8.5 Evolution of the energies ⟨H0⟩ and ⟨H⟩, the density ρ, and condensate fraction ρ0 after a translation invariant perturbation V is suddenly added to the Hamiltonian.
9.1 q = 3 Potts model energy
9.2 q = 3 Potts model first derivative of the energy
9.3 Plot showing the order parameter of the q = 3 Potts model as a function of external field, λZ
9.4 Plot showing the magnetization of the q = 3 Potts model in the direction of the magnetic field, as a function of λZ
9.5 Fidelity diagram for the q = 3 Potts model, computed from D = 3 ground states evolved with the simplified update.
9.6 Two point correlation function of the q = 3 Potts model for various values of the external field, λZ
9.7 Entanglement entropy of the q = 3 Potts model
9.8 q = 4 Potts model energy
9.9 The first derivative of the energy-per-site of the q = 4 Potts model with respect to the magnetic field, λZ, as calculated by a finite difference method
9.10 A plot of the order parameter of the q = 4 Potts model
9.11 A plot of the magnetization in the direction of the magnetic field for the q = 4 Potts model
9.12 The fidelity diagram for the q = 4 Potts model, using the simplified update D = 4 ground states.
9.13 Two point correlation function of the q = 4 Potts model.
9.14 Entanglement entropy for the q = 4 Potts model
10.1 A simple example of frustration
10.2 Néel and collinear ground states.
10.3 The generally accepted phase diagram for the J1-J2 model.
10.4 The columnar dimer and plaquette RVB states
10.5 PEPS algorithm variants for the J1-J2 model
10.6 A plot comparing the energies given by the four PEPS algorithm variants
10.7 A plot of the energy per lattice site vs J2/J1 for various values of D. (inset) Convergence of the energy per lattice site with the bond dimension, D, at J2/J1 = 0.5.
10.8 A plot of the Néel order parameter vs J2/J1 for various values of D
10.9 A plot of the co-linear order parameter vs J2/J1 for various values of D
10.10 A plot of the nearest-neighbour spin-spin expectation values and plaquette order parameters.
10.11 A plot of the entanglement entropy for the J1-J2 model.
11.1 The determination of the state µk, expressed as a 2D tensor contraction.
11.2 Hexagonal lattice quantum Ising model: (a) expectation value of the Hamiltonian, and (b) order parameter.
11.3 Square lattice quantum Ising model: (a) expectation value of the Hamiltonian, and (b) order parameter
11.4 Square lattice quantum 3-Potts model: (a) expectation value of the Hamiltonian, and (b) order parameter.
11.5 (a) Geometric entanglement for the hexagonal lattice quantum Ising model. (b) Geometric entanglement for the square lattice quantum Ising model
A.1 The operation of the two-body gates a and b on the boundary MPS |ϕ[0]⟩
A.2 The exact evolution of the boundary state with an MPS may lead to an increase in the MPS bond dimension.
A.3 Overview of Lemma 2, part 1
A.4 Overview of Lemma 2, part 2
A.5 Overview of Lemma 2, part 3.
A.6 Overview of Lemma 3, part 1.
A.7 Overview of Lemma 3, part 2.
A.8 Overview of Lemma 4
A.9 Overview of Lemma 5, part 1
A.10 Overview of Lemma 5, part 2.
A.11 The evolution of an MPS under non-unitary gates
A.12 The computation of the left and right scalar product matrices
A.13 The gates 'a' and 'b', formed by contracting the PEPS tensors with their conjugate versions along the physical index.
A.14 The diagonal contraction scheme for the square lattice
A.15 The parallel contraction scheme for the square lattice
A.16 The computation of the scalar product matrices in the parallel scheme.
A.17 A contraction scheme for the hexagonal lattice
A.18 A contraction scheme for the Kagome lattice
A.19 A contraction scheme for the triangular lattice
B.1 The basic CTM structure for a PEPS with 2 × 2 periodicity
B.2 Insertion of a 2 × 2 block of sites i) to the left of the existing unit-cell ii) to the right of the existing unit-cell iii) in the middle of the existing unit-cell
B.3 The absorption process in a horizontal direction
B.4 Renormalization of the vertical bonds by the renormalization operators Q and W
B.5 The stages of the first directional CTMRG algorithm.
B.6 The stages of a more stable, but computationally more expensive directional CTMRG algorithm.
C.1 The expression of the distance metric in terms of tensor contractions
C.2 The tensor network representation of equation C.2.
C.3 The matrices NA and MA
C.4 The splitting of A and B into W, X, Y and Z

Chapter 1
Introduction

Determining the properties of quantum many-body systems is a central challenge in modern physics. Improving our understanding of such systems would catalyze progress in many fields of physics and technology, such as condensed matter physics and quantum field theory, the search for a quantum theory of gravity, and the engineering of systems that harness quantum phenomena as a basic resource. However, the complexity of unraveling the properties of quantum many-body systems belies the simple mathematical elegance of the postulates of quantum mechanics. From a very simple microscopic description, one can observe complex, highly correlated collective behaviour. Indeed, it is still a point of contention as to whether such physics can be explained entirely from a microscopic description, or whether emergent physical laws play a part [And72]. Nonetheless, we do not currently have the tools to fully verify the reductionist view of Nature.
Even a very fundamental task, such as determining the zero-temperature ground state of a quantum many-body system, typically requires classical resources that scale exponentially in the system size. Historically, there have been two major avenues for addressing this problem. In a 1982 paper [Fey82], Richard Feynman suggested that the peculiarities of quantum mechanics could be exploited to simulate quantum phenomena exponentially faster than with a classical Turing machine. Since then, a great deal of effort has been directed at developing a scalable quantum computer to efficiently solve such problems, among many others. At present, however, a universal quantum simulation device seems on the distant horizon, dogged by the experimental fragility of quantum mechanics and its tendency to decohere [NC00]. In parallel, classical computer technology was advancing exponentially year upon year in accordance with the much-fabled Moore's law. Whilst this did not alleviate the computational intractability of exactly determining (for example) ground states of macroscopically large systems, it did allow the exact diagonalization of progressively larger systems over time. More importantly, it stimulated interest in a variety of methods for approximately calculating ground states of many-body systems, automating the computational work of existing techniques such as perturbation theory and leading to genuinely novel methods such as the density matrix renormalization group (DMRG) [Whi92] and quantum Monte Carlo (QMC) [CA80].

Entanglement - the property whereby a measurement on one part of a quantum system affects the measurement outcome on another - was initially a mysterious and controversial aspect of quantum mechanics. In 1935, Einstein, Podolsky and Rosen could not accept that the universe may have a fundamentally non-local character, and expressed as much in the so-called EPR paradox [EPR35].
Here, they proposed that quantum mechanics was incomplete, and that the missing physics must be accounted for by a local hidden variable. The theoretical work of Bell [Bel64], carried forward by Clauser, Horne, Shimony and Holt [CHSH69], suggested some simple inequalities that, when tested experimentally, would determine whether quantum mechanics obeys such a local realist picture. The experiments of Aspect [AGR82, ADR82] showed that quantum mechanics violates such inequalities, and as such entanglement - an unmistakably non-classical physical property - was confirmed as a very real part of Nature.

Entanglement is at the heart of the complexity of describing quantum many-body physics. Interesting quantum many-body phenomena such as quantum phase transitions [Sac99] are driven by the possibility for many-body states to exhibit strongly correlated behaviour at absolute zero temperature. Entanglement is also of great consequence in the classical simulation of quantum many-body systems, for two reasons. On one hand, it can be shown that the presence of entanglement is key to the exponential scaling of the cost of the exact classical simulation of quantum systems [Vid03]. On the other hand, it is known that for a certain class of Hamiltonians, the amount of entanglement in the ground state is limited by well-established laws [VLRK03]. The ground state for these systems lives in a low-entanglement region of the entire Hilbert space. This means that we can approximate the ground states of these systems by considering a subset of all the degrees of freedom. In quantum Monte Carlo, for example, this is exploited by sampling the Hilbert space. The question naturally arises: is there a representation of quantum states that is systematically confined to a low-entanglement region of the entire Hilbert space? In this thesis we examine tensor networks (TNs), a recently developed ansatz for many-body states that is limited in such a way.
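The exponential cost of exact classical simulation can be made concrete with a few lines of NumPy. This sketch is purely illustrative and is not part of the thesis: the model (an open-boundary transverse-field Ising chain) and the helper names `ising_hamiltonian` and `site_op` are our own choices. A chain of N spin-1/2 particles has a 2^N-dimensional Hilbert space, so even for a modest N the dense Hamiltonian is already a 2^N × 2^N matrix, and memory and diagonalization time grow exponentially with system size.

```python
# Illustrative sketch (not from the thesis): exact diagonalization of the
# 1D transverse-field Ising chain H = -sum_i sz_i sz_{i+1} - h sum_i sx_i.
# The Hamiltonian is a 2^N x 2^N matrix, which is why exact diagonalization
# is restricted to small N.
import numpy as np

def ising_hamiltonian(n_sites, h):
    """Dense Hamiltonian of the open-boundary transverse-field Ising chain."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)

    def site_op(op, site):
        # Kronecker product placing `op` at `site`, identity elsewhere.
        mats = [op if i == site else eye for i in range(n_sites)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2**n_sites, 2**n_sites))
    for i in range(n_sites - 1):
        H -= site_op(sz, i) @ site_op(sz, i + 1)
    for i in range(n_sites):
        H -= h * site_op(sx, i)
    return H

H = ising_hamiltonian(8, h=0.5)
ground_energy = np.linalg.eigvalsh(H)[0]
print(H.shape)   # (256, 256): already 2^8 rows for only 8 spins
```

Doubling the chain length to 16 spins would square the matrix dimension to 65536, and 50 spins would already be far beyond any classical memory, which is the wall the approximate methods discussed here are built to avoid.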
The earliest example of a tensor network representation of a quantum state is the Affleck-Kennedy-Lieb-Tasaki (AKLT) ground state [AKLT88]. The authors identified a spin-1 chain Hamiltonian whose ground state had many unique properties and decomposed in such a way that it could be written exactly in terms of symmetrization operators acting on entangled spin-1/2 pairs ("lattice bonds"). Later, Fannes, Nachtergaele and Werner introduced finitely correlated states, a description of 1D quantum systems in which many-body correlations are modeled with a finite-dimensional vector space at each lattice bond [FNW92]. This work formalised a representation, later to become known as the matrix product state (MPS), which could potentially capture the low-energy properties of one-dimensional (1D) quantum systems, but did not present a general method for obtaining the ground state of a particular Hamiltonian. In parallel, White [Whi92] devised a variational method for obtaining the ground state of the Heisenberg antiferromagnetic spin chain based on renormalization group [Wil75] methods. This procedure, known as the density matrix renormalization group (DMRG), formulated a way of reducing the degrees of freedom of a quantum system so as to keep the most relevant many-body correlations. The potential of the MPS for simulating quantum systems was highlighted when Östlund and Rommer [OR95] demonstrated that the ground states determined by DMRG have an equivalent MPS representation. On this basis, they suggested that these ground states could be determined by a variational treatment of the family of MPS states. Later, whilst investigating the efficient classical simulation of quantum computation, Vidal [Vid03] showed, using the MPS description of a pure quantum state and the circuit model of quantum computation, that entanglement is a necessary resource for the exponential computational speedup of a quantum computer.
Elsewhere, Vidal described the time-evolving block decimation (TEBD) algorithm for finding the ground state of a one-dimensional quantum system represented by an MPS and governed by a Hamiltonian with nearest-neighbour interactions [Vid04]. In this thesis, we focus on algorithms for finding the ground state of two-dimensional (2D) lattice systems. Here, there is much need for novel, efficient computational methods, as analytic techniques are more difficult to devise, and DMRG has struggled to deal with the correlations of systems in more than one dimension. Moreover, some of the more interesting examples of exotic behaviour in many-body systems, such as topological order, are only known to exist in 2D systems. The basis for our algorithm is the projected entangled pair states (PEPS) ansatz, introduced in [VC04]. This natural extension of the MPS to quantum systems in two and higher dimensions had also been introduced in the context of 3D classical systems as the tensor product states (TPS) ansatz [NOH+00, NHO+01, GMN03]. For this reason the literature may use the two terms interchangeably. Our approach is to first develop an understanding of the physical problem in Chapter 2, by defining a standard technique for finding the ground states of quantum systems known as imaginary-time evolution. This method, which lies at the heart of our algorithm, also draws a well-known correspondence between computing the ground state of a D-dimensional quantum system and finding the statistical properties of a (D+1)-dimensional classical system at finite temperature. In Chapter 3, we review the foundational ideas and notation of tensor network theory and motivate their use for studying ground states of many-body Hamiltonians. In Chapter 4, we suggest a hierarchical picture of tensor networks in terms of their spatial dimensionality. We show that the problem of finding the ground state in zero spatial dimensions (e.g.
a single quantum spin) is trivial for a classical computer, and that from this we can deduce efficient algorithms for approximating the ground state in higher dimensions. In Chapter 5, we describe the properties of the PEPS ansatz and introduce our algorithm for finding the ground states of infinite, translationally invariant two-dimensional systems. We demonstrate the power and versatility of the ansatz for computing physical quantities of significance. In Chapters 6, 7, 8 and 9 we benchmark the algorithm by applying it to systems for which there is either an exact solution or extensive study with other numerical methods. In Chapters 10 and 11 we show that the algorithm can be used to tackle problems that are difficult to study with existing algorithms.

Chapter 2
The Quantum-Classical Correspondence

Consider a classical system made of N sites, where each site can be in one of d different states. Then at any given time, the system is in one of the d^N possible configurations. Each configuration \sigma has an energy E_\sigma. In thermal equilibrium at temperature T, the probability of finding the system in configuration \sigma is given by e^{-\beta E_\sigma}/Z, where

Z = \sum_{\sigma} e^{-\beta E_\sigma}    (2.1)

is the partition function and \beta = 1/kT. Thus, computing the partition function involves a summation over a number of configurations, each labeled by a different value of \sigma, that scales exponentially in the system size. Manipulating the partition function can give us a wealth of information about the statistical properties of the ensemble, such as the average energy of the system or the average magnetization of a magnetic system. So, for a given Hamiltonian, our partition function is a function of temperature, T, and we may monitor how the statistical properties of the system change with varying temperature. In a quantum mechanical setting, quantum fluctuations play a role analogous to that of temperature, such that even at zero temperature the description of the system can be very rich.
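The exponential cost of evaluating eq. (2.1) directly can be made concrete with a small sketch. The nearest-neighbour Ising-type energy function below is an assumed toy example, not taken from the text; only the brute-force sum over all d^N = 2^N configurations is the point.

```python
import itertools
import math

def energy(config, J=1.0):
    """Toy energy function: a 1D Ising ring, E = -J * sum_k s_k s_{k+1} (assumed example)."""
    N = len(config)
    return -J * sum(config[k] * config[(k + 1) % N] for k in range(N))

def partition_function(N, beta):
    """Brute-force evaluation of eq. (2.1): a sum over all 2^N configurations."""
    return sum(math.exp(-beta * energy(c))
               for c in itertools.product([-1, +1], repeat=N))

# Already at modest N the 2^N terms dominate the cost
Z = partition_function(6, beta=0.5)
```

For this particular toy model an exact closed form exists (Z = lambda_+^N + lambda_-^N with lambda_± = 2 cosh(beta J), 2 sinh(beta J)), which makes the brute-force result easy to verify.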
For a quantum Hamiltonian, H_Q, the ground state, |\psi_0\rangle, is the eigenstate of H_Q for which the following equation holds:

H_Q |\psi_0\rangle = E_0 |\psi_0\rangle,    (2.2)

where E_0 is the lowest eigenvalue of H_Q. In a straightforward choice of basis, |\psi_0\rangle may be described by a quantum superposition in this basis. Here, we are interested in observable properties of the ground state. A quantum phase transition (QPT) occurs at zero temperature when, upon changing some Hamiltonian parameter, the qualitative properties of the ground state change. By computing observables for ground states corresponding to different values of the Hamiltonian parameter, we can build a phase diagram of the system. In particular, for many of the phase transitions we are interested in, the change in phase is captured by a local order parameter that is zero in one phase and non-zero in the other. In practice, we cannot compute the properties of the ground state until we have determined the ground state, and for most Hamiltonians of interest this is a highly non-trivial task. Exact diagonalization becomes intractable even for relatively small systems, so we must consider other means of finding the lowest energy eigenstate of the Hamiltonian. A well-known technique for doing so is imaginary-time evolution. Consider some initial state |\psi_i\rangle with non-zero overlap with the actual ground state |\psi_g\rangle. Then, it is possible to show that evolving |\psi_i\rangle with the imaginary-time evolution operator, e^{-H\tau}, leads to the ground state (up to some normalisation constant) in the limit of infinite imaginary time, \tau, i.e.

|\psi_g\rangle \propto \lim_{\tau \to \infty} e^{-H\tau} |\psi_i\rangle.    (2.3)

One can treat this imaginary time as an extra dimension in the quantum ground-state problem. From this reasoning, a well-known correspondence [Sac99, Hen99] arises.
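A minimal numerical sketch of the imaginary-time evolution of eq. (2.3). The single-spin Hamiltonian, step size and iteration count below are illustrative assumptions, not taken from the thesis; the point is only that repeated application of e^{-H delta_tau}, with renormalisation, projects a random initial state onto the ground state.

```python
import numpy as np

np.random.seed(0)

# Assumed toy Hamiltonian: a single spin-1/2 in a mixed field
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
H = -sz - 0.5 * sx

# Build e^{-H dtau} from the eigendecomposition of the Hermitian H
w, V = np.linalg.eigh(H)
dtau = 0.1
U = V @ np.diag(np.exp(-w * dtau)) @ V.T

# Evolve a random initial state (generically non-zero overlap with the ground state)
psi = np.random.rand(2)
for _ in range(500):
    psi = U @ psi
    psi /= np.linalg.norm(psi)   # the normalisation constant of eq. (2.3)

ground = V[:, 0]                  # exact ground state: lowest eigenvalue of H
overlap = abs(psi @ ground)       # approaches 1 as tau -> infinity
```

Each step suppresses excited-state amplitudes by a factor e^{-(E_n - E_0) dtau}, which is why convergence is exponential in the total imaginary time.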
Finding the ground state of an infinite quantum system in D dimensions is equivalent to calculating the local observable properties of an infinite (D+1)-dimensional classical system. In this document we are concerned with lattice systems, where the spatial dimensions have been discretized. To complete the above discussion, consider dividing the imaginary-time evolution into steps of size \delta\tau. Now, both the space and time dimensions are discrete, and so the correspondence between finding the ground state of a D-dimensional quantum system and evaluating the partition function of a (D+1)-dimensional classical system also holds for lattice systems. In Chapter 4 we will discuss this at greater length and describe its consequences from a computational perspective.

Chapter 3
Foundations of Tensor Networks

A tensor is a multi-dimensional array of complex coefficients. A collection of such tensors, with legs connected according to some network pattern, is called a tensor network (TN). Graphically, we usually denote each tensor by a closed shape with protruding legs (see fig. 3.1). Each leg specifies an index of the array, and the total number of legs determines the order of the tensor. A generic coefficient of the tensor A in fig. 3.1 is A_{\alpha\beta\gamma}, where \alpha, \beta and \gamma run from 1 to \chi_\alpha, \chi_\beta and \chi_\gamma respectively. We say that the dimensions of the indices are \chi_\alpha, \chi_\beta and \chi_\gamma, and that A contains \chi_\alpha \chi_\beta \chi_\gamma complex coefficients.

Figure 3.1: Example of a simple three-index tensor.

Figure 3.2: Examples of common tensors and their equivalent mathematical form: (i) a tensor with no legs is a complex number A; (ii) a tensor with one leg is a vector of coefficients A_\alpha; (iii) a tensor with two legs is a matrix with elements A_{\alpha\beta}.

Some special cases of tensors are shown in figure 3.2. One can see that a tensor with no legs is a complex number, a tensor with a single leg is a vector and a tensor with two legs is a matrix.
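In array-programming terms, a tensor is simply an n-dimensional array. A short sketch (the dimensions chosen here are arbitrary):

```python
import numpy as np

# A three-index tensor like A in fig. 3.1, with assumed dimensions
# chi_alpha = 2, chi_beta = 3, chi_gamma = 4
A = np.random.rand(2, 3, 4)
assert A.size == 2 * 3 * 4          # chi_alpha * chi_beta * chi_gamma coefficients

# The special cases of fig. 3.2
number = np.array(1.0 + 2.0j)       # no legs: a complex number
vector = np.zeros(5)                # one leg: a vector
matrix = np.zeros((5, 5))           # two legs: a matrix
```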
Since such structures are easily dealt with in mathematical software, we will see that being able to manipulate parts of our tensor network into such structures is extremely important.

3.1 Basic tensor operations

3.1.1 Permutation

A permutation reorders the coefficients of a tensor according to some reordering of the tensor legs. In fig. 3.3 we depict the tensor A permuted into the tensor \tilde{A}, where each coefficient of \tilde{A} is determined according to the assignment \tilde{A}_{\beta\alpha\gamma} = A_{\alpha\beta\gamma}.

Figure 3.3: Example of a tensor permutation, taking the tensor A with coefficients A_{\alpha\beta\gamma} and returning the tensor \tilde{A} with coefficients \tilde{A}_{\beta\alpha\gamma}.

3.1.2 Reshaping

A reshape operation takes two or more legs of a tensor and joins them into a single index, thus reducing the order of the tensor whilst keeping the number of coefficients unchanged. This means that the index of the new leg runs over the product of the indices of the constituent legs. A simple example is shown in figure 3.4. Here, we have joined two legs of a tensor A to create a new index \delta of dimension \chi_\beta \chi_\gamma. The leg corresponding to the index \alpha is left untouched. The result is a new tensor \tilde{A} with only two legs, and coefficients \tilde{A}_{\alpha\delta}. Often, we wish to join indices together and later recover the original indices. A useful notation, then, is to express the coefficients as \tilde{A}_{\alpha(\beta\gamma)}, where the brackets indicate that the indices have been joined.

Figure 3.4: Example of a tensor reshape, taking the tensor A with coefficients A_{\alpha\beta\gamma} and returning the tensor \tilde{A} with coefficients \tilde{A}_{\alpha\delta}.

3.1.3 Tensor Multiplication

A tensor multiplication is defined as an inner product over an index that is shared between two tensors. That is, for two tensors A and B with coefficients A_{\alpha\beta\gamma} and B_{\beta\rho}, a tensor multiplication obtains a new tensor C with coefficients

C_{\alpha\gamma\rho} = \sum_{\beta} A_{\alpha\beta\gamma} B_{\beta\rho}.

Such a process may also be called the contraction of the shared index \beta.
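The permute and reshape operations above map directly onto standard array operations; a sketch with assumed dimensions:

```python
import numpy as np

A = np.random.rand(2, 3, 4)               # A_{alpha beta gamma}

# Permutation (fig. 3.3): A_tilde_{beta alpha gamma} = A_{alpha beta gamma}
A_perm = np.transpose(A, (1, 0, 2))

# Reshape (fig. 3.4): join beta and gamma into one index delta of
# dimension chi_beta * chi_gamma = 12, leaving alpha untouched
A_resh = A.reshape(2, 12)                 # A_tilde_{alpha (beta gamma)}

# The joined index can be split again, recovering the original tensor
assert np.allclose(A_resh.reshape(2, 3, 4), A)
```

Note that joining and later splitting an index is lossless precisely because the number of coefficients is unchanged, as stated above.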
In practice, mathematical software supports matrix multiplication, and so we perform tensor multiplication by a succession of permutation, reshape and matrix multiplication operations. Often, we want to contract many indices in a tensor network shared by many tensors. We do this by performing a series of tensor multiplications. As a shorthand, we will usually call the contraction of all the shared indices in a network the contraction of the tensor network. We show in figure 3.5 a simple three-body tensor network contraction broken down into its various stages. Here, the subscript indices for each tensor specify the ordering of the indices. The stages in this network contraction can be described as follows. Firstly (i), we select a pair of connected tensors to multiply. Here, we choose A and C, which share the index labeled \gamma. The choice of order of multiplication does not affect the result, but is normally made on consideration of computational cost. Next (ii), we permute the tensor A \to \tilde{A} such that the shared index \gamma appears last in \tilde{A}'s list of indices. Since \gamma appears first in C's index order, no permutation of C is required. Then (iii), we reshape \tilde{A} and C into matrices \hat{A} and \hat{C} by joining the indices \alpha, \beta and \delta, and the indices \epsilon and \rho. We then contract the tensors by means of matrix multiplication, returning a new tensor O, i.e.

O_{(\alpha\beta\delta)(\epsilon\rho)} = \sum_{\gamma} \hat{A}_{(\alpha\beta\delta)\gamma} \hat{C}_{\gamma(\epsilon\rho)}, \quad i.e. \quad O = \hat{A}\hat{C}.    (3.1)

Then (iv), we reshape O so that the non-contracted indices reappear. Steps (vi)-(ix) closely mirror steps (ii)-(v).

Figure 3.5: The process for contracting a simple three-tensor tensor network.

3.1.4 Computational Cost Considerations of Tensor Network Contractions

From the above description, it should be clear that a generic N-body network contraction can be broken down into N-1 two-body tensor multiplications, each of which contains permutation, reshape and matrix multiplication operations.
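A sketch of a single tensor multiplication implemented, as described above, by permute, reshape and matrix multiplication. The helper `contract` and its argument names are illustrative, not from the thesis; the result is cross-checked against a direct einsum contraction.

```python
import numpy as np

def contract(A, B, axis_A, axis_B):
    """Contract one shared index via permute -> reshape -> matmul -> reshape."""
    A_perm = np.moveaxis(A, axis_A, -1)   # shared index last in A
    B_perm = np.moveaxis(B, axis_B, 0)    # shared index first in B
    a_shape = A_perm.shape[:-1]
    b_shape = B_perm.shape[1:]
    chi = A_perm.shape[-1]
    # reshape both tensors into matrices and multiply
    C = A_perm.reshape(-1, chi) @ B_perm.reshape(chi, -1)
    # reshape so the non-contracted indices reappear
    return C.reshape(a_shape + b_shape)

A = np.random.rand(2, 5, 3)   # A_{alpha beta gamma}
B = np.random.rand(5, 4)      # B_{beta rho}
C = contract(A, B, axis_A=1, axis_B=0)   # C_{alpha gamma rho}
```

The computational cost is dominated by the matrix multiplication, consistent with the cost discussion that follows.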
There are two computational considerations. The first is the computational cost: the number of computational cycles required to multiply two tensors. This is the sum total of cycles needed for the permute, reshape and matrix multiplication stages, and is usually dominated by the last stage. The cost of multiplying matrices A_{\alpha\beta} and B_{\beta\gamma} scales as \chi_\alpha \chi_\beta \chi_\gamma. The second consideration is the memory requirement: the maximum amount of computer memory required to store the current state of the tensor network at any stage of the contraction. For the same multiplication of A and B, the memory requirement is \chi_\alpha \chi_\beta + \chi_\beta \chi_\gamma + \chi_\alpha \chi_\gamma. For the contraction of a many-tensor network, both of these costs depend on the order of contraction. In general, the order of contraction that optimises the computational cost may not optimise the memory requirement, and vice versa. Since it is usually preferable to minimise the total time required for a contraction, the contraction order with minimal computational cost is usually chosen. However, there are instances when the memory requirement becomes so large that the computer's memory resources are exhausted and the operating system needs to store and retrieve information from the hard disk. Hard disk access is orders of magnitude slower than RAM access, and as such this can severely affect the time involved in contracting the network. So the best contraction order is the one that minimises the computational cost without exceeding a particular machine's RAM resources.

3.1.5 Splitting

Splitting a tensor means decomposing it into an equivalent many-tensor form. We can use any matrix decomposition to split a tensor, so long as the original tensor is recovered when we contract the resulting split tensors. Typically, however, we will make use of either a singular value decomposition (SVD) or an eigenvalue decomposition (ED).
For instance, in figure 3.6 we show a tensor A decomposed into three tensors, B, \lambda and C, which when contracted together reproduce A. Much like a contraction, splitting a tensor involves permute and reshape operations to put the tensor in matrix form. For the SVD or ED, the dimensions of the indices \delta and \delta' reflect the rank of the matrix. The open indices of A are recovered in the B and C tensors by reshape operations.

Figure 3.6: An example of the process of splitting a tensor.

3.1.6 Truncation

Each leg of a tensor describes a vector space addressed by an index. We have seen that the cost of storing and manipulating tensor networks depends greatly on the dimensions of the tensor legs. In order to keep such costs manageable, we may sometimes reduce the dimension of the shared bonds of a tensor network by projecting out tensor coefficients corresponding to particular values of a given index. This operation is known as truncation.

3.2 Tensor Network Descriptions of Quantum States

Mathematically, pure quantum states can be described by a vector in Hilbert space. An N-body quantum system is described by a Hilbert space that is a tensor product of N local Hilbert spaces, each spanned by an orthonormal basis of dimension d and indexed by i. This means we can express any (pure) N-body quantum state |\psi_N\rangle by the following expansion

|\psi_N\rangle = \sum_{i_1 i_2 i_3 \ldots i_N} C_{i_1 i_2 i_3 \ldots i_N} |i_1 i_2 i_3 \ldots i_N\rangle,    (3.2)

where i_1, i_2, \ldots, i_N span the degrees of freedom within each d-dimensional local system. Note that the number of coefficients, C, scales exponentially in N; this is the essence of the well-known quantum many-body problem. Exactly representing and computing physical characteristics of the N-body quantum state is computationally hard. For now, one can see that the coefficients C may be stored in a tensor, C, with N legs corresponding to the local degrees of freedom.
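A sketch of splitting and truncating via the SVD (the matrix dimensions and the kept rank chi are arbitrary assumptions). It also illustrates why keeping the largest singular values is the natural truncation: the discarded weights set the error of the best low-rank approximation.

```python
import numpy as np

A = np.random.rand(4, 6)                     # tensor already in matrix form

# Split A = B @ diag(lam) @ C via the singular value decomposition
B, lam, C = np.linalg.svd(A, full_matrices=False)
assert np.allclose(B @ np.diag(lam) @ C, A)  # the split tensors reproduce A

# Truncation: keep only the chi largest singular values
chi = 2
A_trunc = B[:, :chi] @ np.diag(lam[:chi]) @ C[:chi, :]

# Frobenius error of the truncation equals the norm of the discarded weights
err = np.linalg.norm(A - A_trunc)
assert abs(err - np.sqrt(np.sum(lam[chi:] ** 2))) < 1e-10
```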
Figure 3.7(i) shows a graphical representation of a tensor corresponding to the state |\psi_N\rangle. We can also represent basic operations on such a state (figures 3.7(ii)-(v)). To this point, we have merely shown that states can be represented by tensors and that we can perform basic operations on these states by way of tensor contraction. We also know that these representations and operations demand computational resources scaling exponentially in N. The naive approach of representing generic quantum states with tensor networks does not offer us anything with which to tackle the many-body problem. What we need to know is whether there are ways we can use tensor networks to describe a subset of quantum states that can be efficiently simulated on a classical computer.

Figure 3.7: Some common TN quantum state operations.

3.2.1 Entanglement in Tensor Networks

We say that a pure N-body state, |\psi_{prod}\rangle, is a product state if it can be expressed in tensor product form, i.e.

|\psi_{prod}\rangle = |\varphi_1\rangle \otimes |\varphi_2\rangle \otimes |\varphi_3\rangle \otimes \ldots \otimes |\varphi_N\rangle    (3.3)

for some |\varphi_k\rangle. Otherwise, we say that the state is entangled, i.e.

|\psi_{entangled}\rangle \neq |\varphi_1\rangle \otimes |\varphi_2\rangle \otimes |\varphi_3\rangle \otimes \ldots \otimes |\varphi_N\rangle.    (3.4)

One can see that a product state may be represented as a tensor network with N tensors, each representing a subsystem |\varphi_k\rangle. Since the state is given by a product of such subsystems, the tensors are trivially connected by an index of dimension \chi = 1. The role, then, of non-trivial interconnections in a tensor network is to represent entanglement between subsystems. Product states have been historically important in approximating quantum many-body systems. The well-known mean-field (MF) approach to quantum many-body ground states finds the product state with the lowest energy. The number of coefficients in a product state description scales linearly in the number of particles, so the mean-field solution is extremely compact.
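The linear-versus-exponential scaling of the product-state description can be illustrated with a small sketch (the choice of local state is an arbitrary assumption):

```python
import numpy as np
from functools import reduce

# A translationally invariant product state of N qubits, stored as
# N local two-component vectors: 2N numbers in total
N = 10
theta = 0.3
local = np.array([np.cos(theta), np.sin(theta)])
factors = [local] * N

# The equivalent full state vector has 2^N coefficients
psi = reduce(np.kron, factors)

assert sum(f.size for f in factors) == 2 * N   # linear storage
assert psi.size == 2 ** N                      # exponential storage
```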
However, we will see that quantum entanglement is a crucial ingredient when describing quantum phase transitions, and as such mean-field solutions can agree poorly with observed behaviour in many situations.

The Schmidt Decomposition

The Schmidt decomposition is a bipartite representation of a quantum state. Given a pure state, |\psi\rangle, of an N-site system divided into two subsystems, A and B, the Schmidt decomposition is given by

|\psi\rangle = \sum_{i=1}^{\chi} \lambda_i |\varphi_i^A\rangle |\varphi_i^B\rangle.    (3.5)

Here, the state is described by a sum over tensor products of the orthonormal Schmidt bases |\varphi_i^A\rangle and |\varphi_i^B\rangle, each term multiplied by a Schmidt coefficient \lambda_i. The Schmidt bases |\varphi_i^A\rangle and |\varphi_i^B\rangle span auxiliary spaces representing the subsystems A and B. Each basis is orthonormal, and in this respect the representation is maximally compact. The dimension of the auxiliary space, \chi, is known as the Schmidt rank. If there is no entanglement between the subsystems A and B (which may themselves be internally entangled), then |\psi\rangle = |\varphi^A\rangle |\varphi^B\rangle and \chi = 1. On the other hand, if \chi > 1, the two subsystems are entangled with one another. The Schmidt decomposition arises in many contexts in quantum information theory. In one-dimensional systems, it is often used to determine the amount of entanglement between the 'left' and 'right' halves of a chain. Generally, in any number of spatial dimensions, the Schmidt rank gauges the amount of entanglement between a block of sites and the rest of the system.

Measures of Entanglement

The Schmidt rank is a discrete measure of entanglement. It reflects only the number of non-zero Schmidt coefficients, and not their relative weights. Clearly, a bipartite state with roughly comparable Schmidt coefficients is further from a product state than a state with one Schmidt coefficient of significantly greater magnitude than the others.
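For a small system, the Schmidt coefficients can be read off from a singular value decomposition of the reshaped coefficient tensor. A sketch using the two-qubit state (|00> + |11>)/sqrt(2); the final lines compute the von Neumann entropy from the squared Schmidt weights, anticipating the continuous measures introduced next.

```python
import numpy as np

# The Bell state (|00> + |11>)/sqrt(2) as a coefficient vector
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Reshape the coefficients into a matrix indexed by (subsystem A, subsystem B);
# its singular values are the Schmidt coefficients lambda_i of eq. (3.5)
M = psi.reshape(2, 2)
lam = np.linalg.svd(M, compute_uv=False)

# Schmidt rank chi > 1 signals entanglement between A and B
chi = int(np.sum(lam > 1e-12))

# The relative weights also matter: S = -sum_i lambda_i^2 log(lambda_i^2)
p = lam[lam > 1e-12] ** 2
S = -np.sum(p * np.log(p))
```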
To reflect this, there exist alternative entanglement measures, which we briefly introduce here.

Entanglement Entropy

The entanglement entropy (or von Neumann entropy) of a block of sites A is given by

S_A = -\mathrm{tr}(\rho_A \log \rho_A),    (3.6)

where \rho_A \equiv \mathrm{tr}_{\notin A}(|\psi\rangle\langle\psi|) is the reduced density matrix of the sites A, obtained by tracing out the sites outside A. Here, a value of S_A = 0 indicates that the block A is not entangled with the rest of the system, also known as the environment. A non-zero value means that A is entangled with the environment, and within this definition, as S_A increases, the block A is said to be more entangled with the environment.

Bloch Vector Magnitude

For a spin-1/2 system, we can decompose the reduced density matrix of a single site, i, in the following way,

\rho = \frac{I + \vec{r}_i \cdot \vec{\sigma}}{2},    (3.7)

where \vec{\sigma} is the vector of spin-1/2 Pauli operators. The vector \vec{r}_i is known as the Bloch vector, and its magnitude, the Bloch vector magnitude or purity of a single site, captures the entanglement between the site i and the environment. A purity of 1 signifies that the site is not entangled with the environment. The amount of entanglement between the site and the environment is said to increase with decreasing purity.

Geometric Entanglement

Whilst continuous, the entanglement entropy and purity are still bipartite measures: they define a block containing a site or sites and calculate the entanglement between this block and the rest of the system. A non-bipartite entanglement measure is the geometric entanglement. Here, we attempt to find the product state, |\Phi_{GE}\rangle, from the set of all product states, S_{prod}, that maximally overlaps with our state, |\Psi_0\rangle. We can define such an overlap as the fidelity,

\Lambda_{max}(\Psi_0) \equiv |\langle\Phi_{GE}|\Psi_0\rangle| = \max_{\Phi \in S_{prod}} |\langle\Phi|\Psi_0\rangle|.    (3.8)

In the thermodynamic limit, the quantity \langle\Phi|\Psi_0\rangle can decay rapidly to zero. As a simple example, consider that both |\Psi_0\rangle and |\Phi\rangle are normalised, translationally invariant, N-body product states, i.e.
|\Psi_0\rangle = |\psi\rangle^{\otimes N}, \quad |\Phi\rangle = |\phi\rangle^{\otimes N}, \quad \langle\Psi_0|\Psi_0\rangle = \langle\Phi|\Phi\rangle = 1.    (3.9)

Then, the overlap is given as

|\langle\Phi|\Psi_0\rangle| = |\langle\phi|\psi\rangle|^N = \alpha^N,    (3.10)

where the per-site overlap 0 \le \alpha \le 1 describes how close the two product states are locally. In the thermodynamic limit, N \to \infty, the fidelity decays exponentially to zero for all \alpha < 1. This problem, whereby we cannot distinguish between states that are very similar (\alpha \approx 1) or very different (\alpha \approx 0) in a local sense, can easily arise for general, entangled states |\Psi_0\rangle. For this reason, the authors of [WDM+05] prescribed the following intensive quantity,

E(\Psi_0) \equiv -\frac{\log \Lambda_{max}^2(\Psi_0)}{N}.    (3.11)

A geometric entanglement E(\Psi_0) = 0 occurs if |\Psi_0\rangle is a product state, and the amount of entanglement in the system is said to increase with increasing E(\Psi_0).

3.2.2 Area Laws and Quantum Lattice Systems

We have seen from the Schmidt decomposition that entanglement can determine the cost of representing quantum states. We also know that in a tensor network, the shared bonds facilitate the representation of entangled states. Therefore, to efficiently represent quantum states with tensor networks, it is important to understand how much entanglement is present in the ground states we will study. We consider Hamiltonians that are local in nature; in this document, we only consider those with nearest-neighbour or next-nearest-neighbour terms.

Figure 3.8: The area law of quantum entanglement for ground states of local Hamiltonians. (i) For a block of L sites in a non-critical 1D system, the entanglement entropy is independent of L. Critical 1D systems attain a logarithmic correction to this scaling. (ii) For an L x L block of sites in a non-critical 2D system, the entanglement entropy scales with L. Critical systems may or may not attain a logarithmic correction.

It has been shown that in many circumstances, the entanglement entropy of the ground states of such Hamiltonians obeys an
area law [VLRK03, PEDC05, CEPD06]. More precisely, if we take a contiguous block of sites in a D-dimensional system, then the entanglement entropy of this block scales with the size of the (D-1)-dimensional boundary of the block. We show such blocks for 1D and 2D lattices in fig. 3.8. In 1D, this means that the entanglement entropy of a length-L block of sites is independent of L. In 2D, the entanglement entropy of an L x L block scales with L. The exception to this rule is critical systems, where we sometimes incur a multiplicative logarithmic correction to the area law. In 1D, the entanglement entropy of a block of L sites in a ground state at criticality scales as log(L). Remarkably, in 2D it is understood that only some exotic systems [Wol06, GK06, SMF09] violate the area law and incur a logarithmic correction. For tensor network representations of ground states, we will see that this has implications for the dimension of the tensor bonds.

Figure 3.9: The representation of a quantum state with coefficients C_{i_1 i_2 \ldots i_N} in an MPS form.

3.2.3 Tensor Networks for Quantum Ground States

We now briefly introduce some tensor network structures for representing ground states in one and two dimensions. In 1D, the matrix product state (MPS) assigns a tensor to each lattice site. The tensors at each end of the chain have an open physical index of dimension d and a shared bond index. The remaining tensors have a physical index and two bond indices. Consider a quantum state expanded in a local basis, i,

|\psi\rangle = \sum_{i_1} \sum_{i_2} \ldots \sum_{i_N} C_{i_1 i_2 \ldots i_N} |i_1 i_2 \ldots i_N\rangle.    (3.12)

The MPS representation expresses the coefficients, C, as follows,

C_{i_1 i_2 \ldots i_N} = \sum_{\alpha}^{\chi_\alpha} \sum_{\beta}^{\chi_\beta} \ldots \sum_{\rho}^{\chi_\rho} \Gamma^1_{\alpha i_1} \lambda^1_{\alpha\alpha} \Gamma^2_{\alpha\beta i_2} \lambda^2_{\beta\beta} \ldots \Gamma^N_{\rho i_N}.    (3.13)

The tensors \Gamma are the site tensors, and the indices \alpha, \beta, \ldots, \rho are the bond indices.
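A sketch of eq. (3.13) for a short chain (N = 3, with arbitrary random tensors and an assumed bond dimension): contracting the bond indices recovers all d^N coefficients from only O(N d chi^2) stored numbers.

```python
import numpy as np

d, chi = 2, 3
G1 = np.random.rand(chi, d)          # Gamma^1_{alpha, i1}
l1 = np.random.rand(chi)             # diagonal weights lambda^1
G2 = np.random.rand(chi, chi, d)     # Gamma^2_{alpha, beta, i2}
l2 = np.random.rand(chi)             # diagonal weights lambda^2
G3 = np.random.rand(chi, d)          # Gamma^N_{rho, iN} with N = 3

# Contract all shared bond indices to recover the coefficients C_{i1 i2 i3}
C = np.einsum('ai,a,abj,b,bk->ijk', G1, l1, G2, l2, G3)
```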
In this instance, we have diagonal weight matrices \lambda, which may contain the Schmidt coefficients, but we could also choose a form without such weights, as in the original MPS formalism [FNW92]. The coefficients C are obtained by contracting all of the shared indices. Graphically, we represent the expansion of the state in MPS form in figure 3.9. Projected entangled pair states (PEPS) [VC04] are a natural extension of the MPS to two and higher dimensions. The basic principle of the MPS is retained - we represent our state with a contractible tensor network with a tensor at each site. Contracting the PEPS network once again returns the coefficients of the wavefunction expanded in the local basis. We show a PEPS in figure 3.10.

Figure 3.10: The representation of a quantum state with coefficients C_{i_1 i_2 \ldots i_N} in a PEPS form. Here we label a D-dimensional bond index and a d-dimensional physical index.

A similar ansatz, called tensor product states (TPS) [NOH+00, NHO+01, GMN03], had been employed to compute statistical properties of 3D classical systems at finite temperature. In this document we will use the term PEPS to describe quantum states on 2D lattices; however, the reader should note that, generally, the terms PEPS and TPS are interchangeable.

3.3 Efficient Representations of Quantum States

An efficient representation of a quantum state satisfies two main properties:

1. It may be stored with a number of coefficients that grows polynomially in the number of sites.

2. We can compute, to at least a good approximation, basic properties of the state and perform basic operations on the state with a computational cost that scales polynomially in the number of sites. For example, we want to be able to efficiently compute local observables and local entanglement measures, and to simulate the action of local gates in quantum circuits.
We will see in the following chapters that we satisfy these properties by bounding the dimension of the shared indices in the MPS and PEPS to some upper limit. For an MPS we term this refinement parameter \chi, and for a PEPS, D. This means we constrain our state to a low-entanglement region of the entire Hilbert space.

Figure 3.11: The refinement parameter controls the size of the corner of the entire Hilbert space that is accessible in the tensor network representation. A higher value of the refinement parameter means more states are accessible, but the representation is also less compact.

So, whilst states of quantum many-body systems lie in a Hilbert space that grows exponentially in the system size, ground states of certain Hamiltonians lie in a region of this Hilbert space where there is relatively little entanglement. Meanwhile, with tensor networks, we can efficiently represent corners of the entire Hilbert space. The size of the corner depends on the value of the refinement parameter - as we increase the refinement parameter, we gain access to a larger region of Hilbert space, each one capable of describing more entangled states and providing a better approximation to the real ground state than the last (see fig. 3.11). However, in comparison to the size of the entire Hilbert space, these corners are still exponentially small. The effectiveness of TNs for representing quantum ground states depends on there being a good overlap between the region of the entire Hilbert space containing the ground states and those accessed by TNs for manageably small values of the refinement parameter. It was shown in [TdOIL08, VC06] that in order to represent ground states of 1D systems at criticality, we only require a \chi that grows polynomially in the system size. This may appear daunting, but it is significantly more favourable than an exponential growth of the Hilbert space.
On the other hand, we can represent a non-critical ground state with finite \chi. For infinite 2D systems, there is even an elementary example of a critical ground state that can be written exactly in PEPS form [VWPGC06]. These results form the basis of our approach for representing quantum systems with tensor networks: with a fixed \chi or D, we can approximate the ground state well. This representation is more accurate when the amount of entanglement in the system is relatively small, and less accurate when a large amount of entanglement is present, as, for example, at criticality.

Chapter 4
Dimension and Computational Complexity

In Chapter 2 we established a correspondence in computational complexity between determining the local statistical properties of a (D+1)-dimensional classical system at finite temperature and the local observable properties of the ground state of a D-dimensional quantum system. In this chapter we further develop these ideas in the context of the tensor network representations described in Chapter 3.

4.1 D = 0

We first consider the case D = 0. This relates to calculating the local properties of a 1D classical system at finite temperature, or the local observable properties of the ground state of a single quantum particle. We will assume that our classical system is translationally invariant, and that our quantum Hamiltonian is time-independent. Consider a classical spin system where we want to measure the expectation value of the ith spin. That is, we wish to compute

\langle S_i \rangle = \frac{\sum_\sigma S_i e^{-\beta H(\sigma)}}{Z} = \frac{\sum_\sigma S_i e^{-\beta E_\sigma}}{Z}.    (4.1)

Here, S_i \in \pm 1 is the spin at site i, each \sigma is a configuration of the system, and Z is the partition function as defined in equation 2.1. Now consider a single quantum spin, governed by a Hamiltonian, H_Q. We wish to calculate the expectation value of the spin in the z-direction of the ground state.
That is,

\[
\langle \sigma_z \rangle
\;=\; \frac{\langle \psi_{\mathrm{gr}} | \, \sigma_z \, | \psi_{\mathrm{gr}} \rangle}{\langle \psi_{\mathrm{gr}} | \psi_{\mathrm{gr}} \rangle}
\;=\; \lim_{\tau \to \infty} \frac{\langle \psi_i | \, e^{-H^\dagger \tau} \sigma_z e^{-H\tau} \, | \psi_i \rangle}{\langle \psi_i | \, e^{-H^\dagger \tau} e^{-H\tau} \, | \psi_i \rangle}
\;=\; \lim_{\tau \to \infty} \frac{\langle \psi_i | \, e^{-H\tau} \sigma_z e^{-H\tau} \, | \psi_i \rangle}{\langle \psi_i | \, e^{-H\tau} e^{-H\tau} \, | \psi_i \rangle}
\tag{4.2}
\]

Here we have used the technique of imaginary time evolution explained in Chapter 2 to obtain the ground state, |ψ_gr⟩, from some random initial state |ψ_i⟩. In each case, we have a numerator containing some unnormalised physical information and a denominator that acts as the normalisation constant. In the classical case, the normalisation constant is the partition function Z, whilst for the quantum particle it is the norm of the unnormalised ground state. A standard way to solve the classical problem is to put the numerator and denominator of equation 4.1 in transfer matrix form. This means finding the matrices T_{k,k+1} such that

\[
\sum_\sigma S_i \, e^{-\beta E_\sigma} \;=\; v_1^T \left( \prod_{k=1}^{i-1} T_{k,k+1} \right) \Theta_i \left( \prod_{k=i}^{N-1} T_{k,k+1} \right) v_N
\tag{4.3}
\]

Here, v_1 = [1 1 ... 1] and v_N = [1 1 ... 1]^T. The operator Θ_i ensures that the correct multiplicative constant, S_i, is applied to each term in the summation in the numerator. Likewise, the normalising partition function can be expressed as

\[
Z \;=\; \sum_\sigma e^{-\beta E_\sigma} \;=\; v_1^T \left( \prod_{k=1}^{N-1} T_{k,k+1} \right) v_N
\tag{4.4}
\]

Graphically, we represent the calculation of ⟨S_i⟩ in fig. 4.1. From this, we conclude:

1. Equation 4.3 and equation 4.4 each require of order N matrix-vector multiplications. That is, the computational complexity scales linearly in the system size.
2. The two computations differ only by the insertion of a local operator Θ_i. The remainder of the computation is common to each expression.

The consequence of the first point is that computing a local statistical property of a 1D classical system at finite temperature is an elementary task for a classical computer to perform.

Figure 4.1: The computation of the partition function of a 1D classical system. The matrices T are known as transfer matrices.
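The transfer-matrix evaluation of equations 4.3 and 4.4 can be sketched in a few lines. The snippet below is a toy illustration, not code from this thesis: it assumes an open Ising chain with energy E(σ) = −J Σ σ_k σ_{k+1} − h Σ σ_k, folds the field term into the transfer matrix and the left boundary vector (a slightly different convention from the all-ones v_1 above), and checks the result against brute-force enumeration of equation 4.1.

```python
import numpy as np
from itertools import product

J, h, beta = 1.0, 0.3, 0.7
s = np.array([1.0, -1.0])                                  # the two local spin values
T = np.exp(beta * (J * np.outer(s, s) + h * s[None, :]))   # bond term + field of right spin
v1, vN = np.exp(beta * h * s), np.ones(2)                  # boundaries (field of site 1 on v1)
Theta = np.diag(s)                                         # inserts the factor S_i

def transfer_matrix_Si(N, i):
    """<S_i> of an open chain via the transfer-matrix forms of eqs. (4.3)-(4.4)."""
    left = v1 @ np.linalg.matrix_power(T, i - 1)           # i-1 matrices left of site i
    right = np.linalg.matrix_power(T, N - i) @ vN          # N-i matrices to its right
    Z = v1 @ np.linalg.matrix_power(T, N - 1) @ vN         # eq. (4.4)
    return (left @ Theta @ right) / Z

def brute_force_Si(N, i):
    """<S_i> by direct summation over all 2^N configurations (eq. 4.1)."""
    num = Z = 0.0
    for sigma in product([1, -1], repeat=N):
        E = -J * sum(sigma[k] * sigma[k + 1] for k in range(N - 1)) - h * sum(sigma)
        w = np.exp(-beta * E)
        Z += w
        num += sigma[i - 1] * w
    return num / Z
```

The transfer-matrix route costs order N small matrix multiplications, while the brute-force sum costs 2^N terms; the two agree to machine precision on small chains.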
A consequence of the second point is that computing such a local property is no more complex than evaluating the partition function. As a result, we sometimes speak interchangeably of the computational complexity of computing local properties of the classical system and that of evaluating the partition function. If the system is infinite and translationally invariant, then the transfer matrix T_{k,k+1} is identical for all k. In order to compute local properties, we must first compute the left and right eigenvectors of T corresponding to the eigenvalue of maximum magnitude; we term each such eigenvector the dominant eigenvector. It can be determined by diagonalising T, or simply by performing many matrix-vector multiplications and monitoring for convergence. We make the direct correspondence between this 1D classical problem and computing local observable properties of the ground state of the quantum particle (equation 4.2) by dividing the imaginary time evolution into steps of length δτ. As shown in the tensor network in figure 4.2, the quantum problem contains imaginary-time evolution operators e^{-H_Q δτ} in place of the transfer matrices. Otherwise, the two problems are computationally equivalent: both are represented by the contraction of a 1D tensor network.

4.2 D = 1

We now consider the case of computing the local properties of a 2D classical system or the ground state of a 1D quantum chain. These involve the contraction of a 2D tensor network. The 4 × 4 tensor network in figure 4.3i could represent, for example, the partition function of a 2D classical system. We need to be able to contract such networks to evaluate the local properties of the system.

Figure 4.2: The imaginary time evolution of a point particle (0-dimensional) governed by the Hamiltonian H. The time dimension has been discretized into steps of δτ.
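The dominant-eigenvector procedure for the infinite chain can be illustrated as follows. This is a hedged sketch on the same toy Ising transfer matrix as before (not anything specific to this thesis): the dominant left and right eigenvectors are found by repeated matrix-vector multiplication, and the resulting bulk magnetisation is cross-checked against the middle site of a long finite chain.

```python
import numpy as np

J, h, beta = 1.0, 0.3, 0.7
s = np.array([1.0, -1.0])
T = np.exp(beta * (J * np.outer(s, s) + h * s[None, :]))   # field folded onto the right spin
Theta = np.diag(s)                                         # inserts the factor S_i

def dominant_pair(T, tol=1e-14, max_iter=100000):
    """Dominant left/right eigenvectors of T via repeated matrix-vector products."""
    r = np.ones(T.shape[0])
    l = np.ones(T.shape[0])
    for _ in range(max_iter):
        r_new = T @ r
        r_new /= np.linalg.norm(r_new)
        l_new = T.T @ l
        l_new /= np.linalg.norm(l_new)
        done = np.linalg.norm(r_new - r) < tol and np.linalg.norm(l_new - l) < tol
        r, l = r_new, l_new
        if done:
            break
    return l, r

l, r = dominant_pair(T)
m_inf = (l @ Theta @ r) / (l @ r)     # bulk <S> of the infinite chain

# Cross-check: the middle site of a long finite chain should agree closely,
# since boundary effects decay exponentially away from the edges.
N, i = 201, 101
v1, vN = np.exp(beta * h * s), np.ones(2)
num = v1 @ np.linalg.matrix_power(T, i - 1) @ Theta @ np.linalg.matrix_power(T, N - i) @ vN
Z = v1 @ np.linalg.matrix_power(T, N - 1) @ vN
m_mid = num / Z
```

The number of iterations needed is set by the ratio of the two largest eigenvalues of T, which is why convergence monitoring suffices in practice.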
One possible way of contracting such a network is to isolate a 1D boundary of the system and evolve it under the action of an adjacent row or column of operators. For instance, we can take the top row of tensors R11, R12, R13, R14 and calculate the action of the row R21, R22, R23, R24 on it. One then determines a new boundary state, representing two rows of the lattice, described by the tensors B1, B2, B3 and B4 in figure 4.3ii. The problem with proceeding in this way is that it is exponentially hard to simulate the evolution of the boundary state exactly: at each step, the horizontal links in the boundary state must expand to represent all of the correlations exactly. From this, we can appreciate that contracting 2D tensor networks is not as simple for a classical computer as contracting 1D networks. We need ways to approximate 2D contractions in a computationally efficient manner. For an infinite 2D tensor network with open boundary conditions, the problem of contracting the network seems even more daunting. In fig. 4.4 we translate the approach for a translationally invariant, infinite 1D tensor network to a translationally invariant, infinite 2D tensor network. That is, we describe the boundary state by a single tensor θ and treat rows of the tensor network as transfer matrices acting on θ. After many iterations, θ converges to the eigenvector corresponding to the maximum eigenvalue of the transfer matrix.

Figure 4.3: i) A 4 × 4 2D tensor network that might represent, for example, a partition function of a 2D classical system at finite temperature. ii) A straightforward approach to contracting the network by starting with a boundary and evolving it leads to an accumulation of indices. As such, the computational cost of contracting an L × L network in this way scales exponentially in L.

Figure 4.4: An infinite, translationally invariant 2D network defined by the four-legged tensor a. The tensor θ describes the boundary state.

4.2.1 The Infinite MPS approach and iTEBD algorithm

Such an approach for infinite systems presents an obvious computational problem: the boundary state has infinitely many degrees of freedom, making storing and manipulating it in the form of a single tensor θ computationally intractable. The question is therefore how to approximately represent the boundary state and its evolution in a computationally efficient way, exploiting the translational invariance of the lattice. In doing this, we take inspiration from the 1D quantum mechanical analogue of this problem. Finding the ground state of a 1D quantum chain is in general computationally hard due to the presence of quantum entanglement. The density matrix renormalization group (DMRG) algorithm [Whi92] provided a means of finding an approximation to the ground state in a low-entanglement region of the Hilbert space. For infinite translationally invariant 1D systems, it was shown that the ground states found by DMRG could be determined by a variational minimization of the energy over the family of MPS states [OR95, VPC04]. Later, methods were developed to find such an approximation within an MPS representation [Vid04] using imaginary-time evolution to converge to the ground state. The infinite time-evolving block decimation (iTEBD) algorithm approximated the imaginary-time evolution by a network of near-unitary two-body gates, obtained from a Suzuki-Trotter decomposition of the imaginary time evolution operator for an interval δτ. An example of the resulting network is shown in figure 4.5. Starting with an MPS of maximum bond dimension χ, rows of time evolution gates are successively applied. After each application, the bonds in the MPS are truncated so that the description of the state remains compact.
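The gate-then-truncate step just described can be sketched with generic numpy tensors. This is a minimal illustration, not the conventions of [Vid04]: in particular it omits the λ weights of the canonical form and simply absorbs the singular values into the left tensor.

```python
import numpy as np

def itebd_step(A, B, G, chi):
    """One iTEBD-style two-site update: apply a gate, split with an SVD, truncate.

    A, B: MPS tensors of shape (chi, d, chi) (left bond, physical, right bond).
    G:    two-site gate of shape (d, d, d, d) acting on the two physical indices.
    Returns the truncated tensors and the Frobenius-norm truncation error.
    """
    chiL, d, _ = A.shape
    # Contract the two tensors into a two-site block and apply the gate.
    theta = np.einsum('ldr,rem->ldem', A, B)           # (chi, d, d, chi)
    theta = np.einsum('stde,ldem->lstm', G, theta)     # gate on the physical indices
    # Split the block with an SVD and keep the chi largest singular values.
    M = theta.reshape(chiL * d, d * chiL)
    U, sv, Vh = np.linalg.svd(M, full_matrices=False)
    keep = min(chi, len(sv))
    trunc_err = np.sqrt(np.sum(sv[keep:] ** 2))        # discarded weight
    A_new = (U[:, :keep] * sv[:keep]).reshape(chiL, d, keep)   # weights absorbed left
    B_new = Vh[:keep, :].reshape(keep, d, chiL)
    return A_new, B_new, trunc_err
```

The SVD guarantees that the truncated block is the best rank-χ approximation in the Frobenius norm, so the returned error equals the norm of the discarded part.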
In examining the iTEBD algorithm more thoroughly (see [Vid04]), one can see that each iteration consists of SVD operations on tensors, followed by truncation of tensor legs. In the case of the iTEBD algorithm, each step scales as χ³d³, where d is the physical dimension. Having obtained an MPS representation of the ground state, the local observable properties of the state are computed by contracting the network shown in figure 4.6. Such a computation falls into the D = 0 class of problems and scales polynomially in the bond dimension χ. This means that we can both find an approximate representation of the ground state of a 1D quantum Hamiltonian and compute its properties in a computationally efficient manner. Now, return to the problem of contracting the infinite, translationally invariant 2D tensor network, this time with an infinite, translationally invariant MPS approximating the boundary state in some diagonal orientation (see figure 4.7). We can express the evolution of the boundary state in precisely the same schematic form as the iTEBD algorithm. The difference lies in the character of the gates: in one case the contraction of the gate network approximates an imaginary-time evolution, while in the other it constitutes, for example, the evaluation of a partition function. An important distinction between the iTEBD algorithm for 1D quantum systems and its use for the contraction of generic 2D tensor networks is the near-unitary nature of the imaginary-time evolution gates.

Figure 4.5: The imaginary-time evolution of a translationally invariant MPS as defined by the iTEBD approach. The row of G gates collectively approximates the evolution for some time δτ.

Figure 4.6: The computation of a local observable quantity, M, of an MPS. Note that the operation is of the D = 0 class.

Figure 4.7: The contraction of a 2D tensor network, using an MPS to describe the boundary state.
As explained in [Vid04], this means we can truncate the MPS rather naively, as the near-unitary gates keep the MPS close to the so-called canonical form. For the 2D network, we must perform some additional operations before truncation (see [OV08] or Appendix A). However, since these operations are either local decompositions or network contractions of the D = 0 class, the algorithm remains efficient.

4.2.2 The Corner Transfer Matrix

An alternative means of contracting a 2D tensor network is the corner transfer matrix (CTM) [Bax82]. The basic structure of the CTM consists of a unit cell surrounded by corner matrices and edge tensors (fig. 4.8b), each of which characterises a certain part of the environment around the unit cell (fig. 4.8a). The tensors are connected by bonds of dimension χ (fig. 4.8c), and thus our description is a reduced-rank representation of the environment. The corner transfer matrix renormalization group (CTMRG) [NO96] is an algorithm for determining the tensors C1, C2, C3, C4 and E1, E2, E3, E4. We cover the details of an implementation of a similar algorithm extensively in Appendix B; here, we merely wish to convey the basic principle behind it. The CTMRG proceeds by absorbing part of the unit cell into the environment. Here, we absorb the unit-cell tensor A and two edge tensors into the corner C1. Thinking about how this simple step aggregates the correlations in the system illustrates the idea behind the CTMRG. In figure 4.9a, we represent the combined action of A and the edge tensors by the L-shaped block L. If we continued this process, we would obtain an increasingly larger corner matrix. After k steps, we would have a corner matrix with k+1 horizontal legs and k+1 vertical legs, each of dimension D, and a total of D^{2(k+1)} coefficients. Proceeding in such a fashion is obviously computationally inefficient.
However, what if the state we are considering has some limited amount of many-body correlations present, such that after k steps the unit cell is only correlated with the last several legs added to the corner matrix? In fig. 4.9a, this means that the red index is not correlated with the blue indices. In this case, maintaining all D^{2(k+1)} coefficients is excessive. The CTMRG algorithm accounts for this by renormalizing the bonds after each step, such that the aggregated information of all the legs is approximately transmitted in a bond of maximum dimension χ. In this way, we are trying to find a fixed point of the system as depicted in fig. 4.9b, where the renormalization operators V and W are determined in some systematic way. In Appendix B, we describe algorithms for finding these renormalization operators using simple, local contractions or decompositions of tensors.

4.3 D = 2

So far, we have seen that we solve problems of the D = 1 class by approximately contracting 2D tensor networks. Such an approach breaks the problem into a series of D = 0 matrix-vector eigenproblems and simple local operations. We do this because it alleviates the exponential scaling of the computational cost of the D = 1 problem, and because mathematical software is particularly adept at performing these operations. The philosophy for D = 2 extends upon this. For example, in order to find the ground state of a 2D quantum system, or evaluate the partition function of a 3D classical system at finite temperature, we first break the problem down into a series of D = 1 problems and local decompositions. Having done this, we can approximately solve the D = 1 problems by methods such as the infinite boundary MPS or the CTMRG. In this reductionist manner, we can approximately solve D = 2 problems.
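Returning to the CTMRG step of section 4.2.2, the grow-and-renormalize operation can be sketched as follows. This is a toy illustration under simplifying assumptions: a single-site unit cell, one corner only (edge renormalization is omitted), and isometries V and W taken from the SVD of the grown corner, which is one common but not the only way to fix the rank-reducing matrices.

```python
import numpy as np

def grow_and_renormalize(C, Et, El, a, chi):
    """One CTMRG-style step for a top-left corner: absorb the L-shaped block
    (one bulk tensor and two edge tensors), then truncate back to dimension chi.

    C:  (chi, chi) corner.  Et: (chi, D, chi) top edge.  El: (chi, D, chi) left edge.
    a:  (D, D, D, D) bulk tensor, legs ordered (up, left, down, right).
    """
    D = a.shape[0]
    # Grown corner with fat legs of dimension chi*D (indices grouped as (w,r) and (z,d)).
    Cg = np.einsum('xy,xuw,ylz,uldr->wrzd', C, Et, El, a).reshape(C.shape[0] * D, -1)
    U, sv, Vh = np.linalg.svd(Cg)
    V, W = U[:, :chi], Vh[:chi, :].conj().T   # rank-reducing isometries
    C_new = V.conj().T @ Cg @ W               # renormalized chi x chi corner
    return C_new, V, W, sv
```

Because the isometries come from the SVD of the grown corner, V C_new Wᵀ is the best rank-χ approximation of the grown corner, with error equal to the discarded singular weight.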
In the next chapter, we outline the iPEPS algorithm: a scheme for computing the properties of the ground state of 2D quantum systems that is based on these very principles.

Figure 4.8: a) The CTM structure, with the regions surrounding a single-site unit-cell each allocated a tensor. b) The resulting CTM structure, showing four corner matrices and four edge tensors. c) A single edge tensor, with bond indices of dimension χ.

Figure 4.9: a) A visualization of the basic CTM operation. After each step, the number of indices on the corner transfer matrix increases; however, if some of the indices are uncorrelated, we need only keep a subset of all of the correlations. b) The CTMRG fixed-point problem. Our task is to find the rank-reducing matrices V and W.

Chapter 5 Projected Entangled Pair States

In this chapter, we describe the PEPS ansatz in more detail and then develop an algorithm for finding the ground state of infinite 2D quantum systems described by local Hamiltonians. Our intention is to avoid discussion of the low-level algorithmic details, deferring these to the appendices; here, we discuss the algorithm in terms of the basic notions introduced in Chapter 4. By now we have a clear understanding that 1D tensor network structures (D = 0) are easily dealt with on a classical computer, and that 2D tensor network contractions (D = 1) can be approximated by a series of one-dimensional contractions and some simple local operations. In this chapter we show that 3D tensor network contractions (D = 2) can be decomposed in a similar way. We will focus here on the computation of ground states of 2D quantum systems.

5.1 Projected Entangled Pair States

We briefly introduced the PEPS/TPS ansatz [VC04, NOH+00] in Chapter 3.
Historically, the great success of DMRG and the MPS in efficiently solving 1D quantum problems made this approach an obvious candidate for solving problems in two spatial dimensions. It was proposed that one could choose a basis ordering that 'snakes' through the lattice and then solve the resulting energy minimization on the 1D ansatz, e.g. [WC07, WS98]. We show an example of such an ordering in fig. 5.1i. When such a 2D state is represented by a 1D ansatz, one can see in fig. 5.1ii that the nearest-neighbour interactions become long-ranged in the 1D picture. Recall that the reason underpinning the success of the iTEBD and DMRG algorithms - that the entanglement entropy grows at worst logarithmically with the block size in 1D systems - rests on the Hamiltonian having local interactions. On this basis, we might suspect that it is only for relatively small lattices that the ground state of a 2D system will be well described by an MPS with a reasonable, fixed bond dimension χ. Of course, we already know from the area law that 1D and 2D systems with nearest-neighbour interactions have fundamentally different entropy scaling behaviour, and we can use this to formalise our argument against using a 1D ansatz to represent 2D ground states. Recapping our discussion in Chapter 3: for a 1D system, the entanglement entropy is independent of the block size off-criticality, and scales logarithmically with block size at criticality. For a 2D system, the entanglement entropy scales with the perimeter of the block for non-critical and some critical ground states; for other critical ground states the entanglement entropy acquires a multiplicative logarithmic correction. It was also shown by Vidal [Vid03] that for a tensor network with fixed bond dimension, the maximum entanglement entropy of a contiguous block of sites is proportional to the number of tensor bonds crossing the boundary. In fig.
5.1iii we demonstrate that, for an MPS representation of a 2D system, as we increase the perimeter of a block of sites (black box to red box) the number of bonds crossing the boundary stays fixed. That is, the size of the boundary increases, but the maximum entanglement entropy of the larger and smaller blocks (for fixed χ) is identical. This shows that 1D schemes such as the MPS and DMRG cannot reproduce the area-law entropy scaling of 2D systems and are unsuitable for capturing the physics of strongly correlated 2D systems. As a result of this realization, a general tensor network ansatz for quantum states in two and higher dimensions, in which all neighbouring sites are connected by correlation-carrying bonds, became an area of great interest. The resulting PEPS ansatz [VC04] was shown to have some remarkable properties [VWPGC06]. Most notably:

1. Quantum states expressed as a PEPS are capable of obeying the area law.
2. Every PEPS represents the ground state of some local Hamiltonian.

Combined, these suggest that the PEPS is a powerful ansatz for representing quantum ground states in two or more dimensions. The authors of [VC04] also presented a variational algorithm for finding a PEPS representation of the ground state of finite systems. However, finding the ground state of infinite 2D systems required additional insight. In the next sections, we examine this problem, motivated by the iTEBD algorithm for the MPS.

Figure 5.1: Representing 2D systems with a 1D ansatz. (i) A possible ordering for the basis of a 4 × 4 2D system with nearest-neighbour interactions. (ii) When linearised onto a 1D ansatz, some of the nearest-neighbour interactions in the 2D system become long-ranged. (iii) For a 2D system described by a 1D ansatz, as the perimeter of a block of sites increases, the number of bonds crossing the perimeter may stay constant. Following the reasoning in [Vid03], the maximum entanglement entropy of the two blocks remains the same.
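Vidal's bound can be checked numerically in a toy setting: if a cut is crossed by n bonds of dimension χ, the Schmidt rank across the cut is at most χ^n, so the entanglement entropy is at most n log χ regardless of the physical dimension of the two halves. The state below is a random stand-in with that rank constraint, not a PEPS.

```python
import numpy as np

def entanglement_entropy(psi, dimA, dimB):
    """Von Neumann entropy of subsystem A for a pure state psi of dimension dimA*dimB."""
    sv = np.linalg.svd(psi.reshape(dimA, dimB), compute_uv=False)
    p = sv**2 / np.sum(sv**2)       # squared Schmidt coefficients
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

chi, n = 3, 2                        # bond dimension and number of bonds crossing the cut
rank = chi**n                        # maximal Schmidt rank the network allows across the cut
rng = np.random.default_rng(0)
# Random state whose Schmidt rank across the (16|16) cut is at most chi**n.
psi = (rng.normal(size=(16, rank)) @ rng.normal(size=(rank, 16))).flatten()
psi /= np.linalg.norm(psi)
S = entanglement_entropy(psi, 16, 16)
bound = n * np.log(chi)              # Vidal's bound: S <= n log(chi)
```

For a block of an MPS the number of crossing bonds is always two, which is exactly why the bound cannot grow with the block perimeter.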
As a result, 1D schemes cannot describe 2D states that obey the area law.

5.2 Problem Overview

In this thesis, we are interested in finding the ground states of infinite, translationally invariant 2D systems. That is, our Hamiltonian will be invariant under some shift of lattice sites. For the purpose of this discussion, we will assume that the Hamiltonian contains identical nearest-neighbour terms. That is,

\[
H \;=\; \sum_{\langle i,j \rangle} h^{[i,j]},
\tag{5.1}
\]

where i and j are adjacent lattice sites. It should be noted that the translational invariance of the Hamiltonian may be spontaneously broken in the ground state. However, we will always assume that some degree of translational invariance remains in the ground state. As such, our PEPS representation of the ground state will be made up of a repeating pattern of tensors.

5.2.1 Imaginary-time Evolution Re-visited

We again consider imaginary-time evolution as a means of finding an approximation to the ground state of a quantum Hamiltonian. In this discussion, we consider the square lattice, but it will become obvious that the technique is easily adapted to other lattice geometries. We will assume that our ground state is invariant under shifts of two sites. In effect, this means that our PEPS is formed by an alternating pattern of A and B tensors (see fig. 5.2i). We may say that the lattice L is composed of two interacting sub-lattices L_A and L_B. In this representation, there are only four unique links, and we reference these by the direction they protrude from the tensor A: up, right, down and left (see fig. 5.2ii). Thus, we can rewrite our Hamiltonian in terms of four non-commuting sums of terms, i.e.

\[
H \;=\; \sum_{\tilde r \in \mathcal{L}_A} \left( h^{[\tilde r, \tilde r + \hat y]} + h^{[\tilde r, \tilde r + \hat x]} + h^{[\tilde r, \tilde r - \hat y]} + h^{[\tilde r, \tilde r - \hat x]} \right)
\;=\; \sum_{\tilde r \in \mathcal{L}_A} \left( h^{\tilde r}_u + h^{\tilde r}_r + h^{\tilde r}_d + h^{\tilde r}_l \right)
\tag{5.2}
\]

where x̂ and ŷ are lattice unit vectors in the x and y directions and u, r, d and l refer to up, right, down and left interactions.
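The splitting of H into non-commuting pieces is what the first-order Suzuki-Trotter step (taken next) exploits. The error it incurs can be checked numerically on a two-site toy Hamiltonian with two non-commuting terms, standing in for the link terms of equation 5.2: halving the time step should reduce the single-step splitting error by roughly a factor of four, since it is O(δτ²).

```python
import numpy as np

def expm_herm(H, tau):
    """e^{-H*tau} for a Hermitian matrix H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-w * tau)) @ V.conj().T

# Two non-commuting stand-ins for the link terms of eq. (5.2).
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
Ha = np.kron(sz, sz)           # bond-like term
Hb = np.kron(sx, np.eye(2))    # field-like term; [Ha, Hb] != 0

def trotter_error(dt):
    """Norm of e^{-(Ha+Hb)dt} - e^{-Ha dt} e^{-Hb dt} (first-order splitting)."""
    return np.linalg.norm(expm_herm(Ha + Hb, dt) - expm_herm(Ha, dt) @ expm_herm(Hb, dt))

e1, e2 = trotter_error(0.1), trotter_error(0.05)   # single-step error is O(dt^2)
```

Over a fixed total imaginary time τ the number of steps grows as τ/δτ, so the accumulated error of the first-order scheme is O(δτ), which is why δτ is reduced as the evolution converges.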
Taking the imaginary-time evolution operator corresponding to a time-step δτ, and applying the first-order Suzuki-Trotter expansion, we obtain

\[
e^{-H\delta\tau}
= e^{-\sum_{\tilde r} h^{\tilde r}_u \delta\tau}\, e^{-\sum_{\tilde r} h^{\tilde r}_r \delta\tau}\, e^{-\sum_{\tilde r} h^{\tilde r}_d \delta\tau}\, e^{-\sum_{\tilde r} h^{\tilde r}_l \delta\tau} + O(\delta\tau^2)
= \prod_{\tilde r \in \mathcal{L}_A} e^{-h^{\tilde r}_u \delta\tau} \prod_{\tilde r \in \mathcal{L}_A} e^{-h^{\tilde r}_r \delta\tau} \prod_{\tilde r \in \mathcal{L}_A} e^{-h^{\tilde r}_d \delta\tau} \prod_{\tilde r \in \mathcal{L}_A} e^{-h^{\tilde r}_l \delta\tau} + O(\delta\tau^2)
\tag{5.3}
\]

This closely resembles the iTEBD update in [Vid04], except that we have four distinct links instead of two. A key difference from the iTEBD algorithm is that for a PEPS, there is no known canonical form. In the iTEBD algorithm, the canonical form of the MPS was exploited to drastically simplify the update procedure: after applying the imaginary time-evolution gates, a good approximation to the resulting MPS could be reached by a local SVD and truncation of the MPS tensors. Since there is no canonical form for the PEPS, there is no guarantee that a PEPS determined from local split and truncation operations will be an efficient use of the classical resources. In section 5.3.1 we will consider such a 'split and truncate' algorithm, but for now, we consider a more systematic way of updating the tensors.

Figure 5.2: (i) The infinite PEPS for a square lattice that is invariant under shifts of two lattice sites. (ii) The four links, up (u), right (r), down (d) and left (l), are distinguished based on which direction they protrude from tensor A.

5.3 The iPEPS Algorithm

An overview of the iPEPS algorithm is shown in figure 5.3. The combined action of the gates labeled g in figure 5.3i represents the factor \(\prod_{\tilde r \in \mathcal{L}_A} e^{-h^{\tilde r}_u \delta\tau}\) in equation 5.3. The first (a → b) and third (c → d) steps are justified by the same reasoning: since the imaginary-time evolution operation is near-identity, we assume that the effect of the gate on the link on which it acts dominates the combined effect of the other gates on this link. This means that in order to update a given site, we can focus on a single link (see fig.
5.3b) and not worry about the secondary change in correlations introduced by other gates. Step two involves the determination of new tensors A′ and B′ that are of maximal bond dimension D and best represent the action of the gate on A and B in some sense. Step three involves the replacement of A by A′ and B by B′ globally. This represents the update with respect to the 'up' link direction; repeating it for the 'right', 'down' and 'left' links means we have evolved the system for some imaginary time δτ. In step two, we need to determine A′ and B′ such that they maximise the closeness of (or minimise the distance between) |ψ_{A′B′}⟩ and |ψ_g⟩. In the iTEBD algorithm the split and truncation operation effectively approximated such a requirement, thanks to the canonical form of the MPS, but here we will need to do so in a more explicit manner. We choose to minimise the square error between the target state |ψ_{A′B′}⟩ and the gate operation on the previous state, |ψ_g⟩ ≡ g|ψ⟩, i.e.

\[
\min_{A',B'} \left\| \, |\psi_{A'B'}\rangle - |\psi_g\rangle \, \right\|^2
\;\equiv\;
\min_{A',B'} \Big( \langle \psi_g | \psi_g \rangle - \langle \psi_{A'B'} | \psi_g \rangle - \langle \psi_g | \psi_{A'B'} \rangle + \langle \psi_{A'B'} | \psi_{A'B'} \rangle \Big)
\tag{5.4}
\]

Each of the four terms on the right-hand side may be written as an infinite 2D tensor network. The first term is a constant and so may be ignored in the minimization problem. The remaining three terms depend on A′ and B′ and define our minimization problem.

Figure 5.3: The four basic stages of the iPEPS algorithm. a) The imaginary-time evolution gate is applied to a sheet of corresponding links. b) Since the action of the gate is near-identity in nature, we focus on a single link and assume that the changes to this link from other gates are negligible. c) We determine the new tensors A′ and B′ (with D-dimensional interjoining bond) that best represent the action of the gate on the link. d) We enforce the change globally.
Since the states |ψ_g⟩ and |ψ_{A′B′}⟩ are identical apart from the tensors sharing the link being updated, there is significant computational overlap in the calculation of these terms. We call the 2D tensor network surrounding the link tensors the environment (see fig. 5.4a). The environment captures the influence of correlations on the best choice of A′ and B′. In practice, we find an approximation to the environment by contracting the 2D tensor network into an effective six-tensor form (see fig. 5.4b). So, the main computational components of the iPEPS algorithm are:

1. Contraction of the periodic 2D tensor network into an environment surrounding a single link.
2. Determination of the new PEPS tensors in such a way that minimises the square-error distance function in equation 5.4.

For 1, we can employ either the infinite MPS or CTMRG procedures outlined in Chapter 4 and described in greater detail in Appendices A and B. For 2, we detail two approaches in Appendix C. The first is a variational update scheme that updates the tensors A′ and B′ iteratively, at each step finding the A′ or B′ that minimises the metric. This method is computationally simple and efficient, but is prone to becoming trapped in local minima. Furthermore, it uses the inverse of the environment to calculate the new tensors, and as such small errors in the environment can become greatly magnified. As a result, the new tensors A′ and B′ from this approach have been seen to introduce spurious correlations into the system. An alternative approach is based on the well-known conjugate gradient (CG) algorithm. Here, we use the gradient of the distance metric with respect to the PEPS tensor coefficients to guide us towards the minimum. Importantly, both tensors A′ and B′ are updated together, helping us to avoid local minima.

Figure 5.4: a) The tensor contraction defining the environment around a given link. b) The six-tensor form approximating the environment.
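The algebra behind the variational link update can be sketched schematically. Holding B′ fixed, the distance function 5.4 is quadratic in the flattened coefficients x of A′, f(x) = x†Nx − x†b − b†x + const, where the matrix N and vector b are obtained by contracting the environment. In the sketch below N and b are random stand-ins for those contractions, not actual iPEPS quantities; the snippet also illustrates why a pseudo-inverse with a spectral cutoff is preferable to a plain inverse of the environment.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the environment contractions: N is the positive 'norm' matrix
# multiplying the quadratic term, b collects the overlap with the gated state.
M = rng.normal(size=(20, 20))
N = M @ M.T                        # positive semi-definite by construction
b = rng.normal(size=20)

# Minimiser of f(x) = x^T N x - 2 x^T b from the normal equations N x = b.
# A pseudo-inverse with a spectral cutoff is used instead of a plain inverse,
# since tiny eigenvalues of the environment otherwise amplify its errors.
x = np.linalg.pinv(N, rcond=1e-12) @ b

grad = 2 * (N @ x - b)             # gradient of f; vanishes at the minimum
```

The gradient expression is the quantity a CG-style scheme would follow: it involves N and b only linearly, which is the point made in the next sentence of the text.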
Furthermore, the gradient is linear in the environment, and so errors in small environment spectral components are not amplified.

5.3.1 The Simplified Update

Although there exists no canonical form for the PEPS, some authors [JWX08], inspired by the simplicity of the iTEBD algorithm, described the analogous local update for the PEPS. Here, on each site of the PEPS resides a tensor Γ with a physical index and bond indices, and on each bond resides a diagonal weights tensor λ. The imaginary-time evolution proceeds by applying the gate to the PEPS and making the update based only on local information. That is, we assume that the information normally encoded in the environment is stored in the λ tensors; for the link being updated, the new Γ and λ tensors are determined in the same way as in the iTEBD algorithm - by contraction of the gate and two tensors, followed by SVD and truncation, recovering the interjoining bond. Whilst there is no formal reasoning to suggest such an approach is a near-optimal use of the PEPS coefficients, empirical evidence suggests it does a remarkable job of finding ground states in certain circumstances. In particular, since the long-range correlations encoded in the environment are not taken into account, this update favours states with short-range correlated behaviour. On the other hand, in systems with diverging correlation length, such as those near a continuous phase transition, this scheme will struggle to capture the physical properties. We will see the differing fortunes of the simplified update in Chapters 7 and 9. The advantage of using such a method is the reduction in computational cost. Typically, the most expensive part of a PEPS imaginary-time evolution step is the contraction of the environment. The simplified update avoids this computational work during the time evolution, as the environment is computed only when we need to obtain observable properties of the system.
In a minimalist sense, observables need only be computed once, in the final stage. Thus, by using the simplified update, we can access higher values of D, at the expense of limiting ourselves to states of a more local nature (for a given D).

5.3.2 Computational Complexity of the iPEPS algorithm

On first analysis, it would appear that the leading-order computational complexities of the infinite-MPS and CTMRG lattice contraction schemes scale identically, as χ^3 D^6 + χ^2 D^6 d per step. Each lattice contraction requires many steps for the coefficients of the infinite MPS or the CTM to converge; we label these numbers of steps S_φ and S_CTMRG respectively. In practice, the infinite-MPS contraction scheme is seen to be slower than the CTMRG approach for equivalent χ and D, and this can be attributed to the presence of an iterative procedure within each step of the infinite-MPS algorithm (see Appendix A for details). We label the number of steps required in this procedure S_LR. Though the per-step computational cost of this routine is of sub-leading order, S_LR typically takes values between 10 and 40 for the values of χ and D we have considered in our studies, making its overall cost significant. Following the convergence of the infinite-MPS pair, two finite eigenvectors are converged in another iterative routine of S_v iterations; however, this should be of secondary significance if a sensible limit is put on the value of S_v. The update of a single link has two stages. The initial out-of-loop preparatory contractions incur a one-off cost, scaling as χ^3 D^4 d^2 + χ^2 D^6 d^2 + χ^2 D^4 d^4 + D^2 d^6. This process is identical for both the variational and CG schemes. For the variational scheme, each of the S_VU steps of the update loop scales as D^5 d^5 + D^6 d^3. Within the CG update loop, we need to perform S_CG ∝ D^2 d^2 steps, each of which involves an iterative line minimization.
Each line minimization scales as S_LM D^4 d^6, where S_LM is the number of steps required to find the minimum. The simplified update can be performed by using an iterative sparse SVD routine, where each step scales as D^2 d^3 + D d^4; we assume that S_S ∝ D steps are required. In optimizing the tensors for such an update, we must perform some decompositions on the PEPS tensors, each of which scales as D^5 d^2. These complexities are summarized in Table 5.1. As a rule of thumb, we estimate that χ ∝ D^2 and D ≥ d. Under these assumptions, the critical path of the iPEPS algorithm is the computation of the environment, an operation that scales as D^12. For this reason, when using the full environment at every iteration, we can generally only access D = 2, 3 and 4 on a standard quad-core desktop computer. For the simplified update, we can access D as high as 8, as the environment only needs to be computed once.

5.4 Computing Physical Properties of PEPS States

Having computed a PEPS representation of the ground state, we wish to extract information about the properties of the state. If, by changing some Hamiltonian parameter, our ground state undergoes a phase transition, we wish to distinguish between the phases of the system and approximate where the transition occurs. Traditionally, the theory of phase transitions has centred on Landau's symmetry-breaking theory [Lan37]. Landau proposed that a phase transition occurs when a symmetry is broken, and that as such the different phases can be distinguished by a local order parameter. Whilst modern physics has uncovered classes of phase transitions that depart from this picture [KT73, Lau83], it is apparent that for a large number of many-body Hamiltonians, local observables can tell us much about the ground state and the order of the phases in which the system can exist. Additionally, within the subset of symmetry-breaking QPTs, continuous phase transitions exhibit a diverging length scale at the transition point.
This means that spatial correlation functions decay polynomially, rather than exponentially as in a first-order QPT. Thus, computing spatial correlation functions sheds light on the order of a given symmetry-breaking phase transition.

Table 5.1: Leading-order computational complexity of various iPEPS algorithms for the square lattice.

  iMPS, variational update:
    tensor update: χ^2 D^6 d^2 + χ^3 D^4 d^2 + χ^2 D^4 d^4 + S_VU (D^5 d^5 + D^6 d^3)
    environment:   S_φ (χ^3 D^6 + χ^2 D^6 d + S_LR (χ^3 D^4)) + S_v (χ^3 D^4 + χ^2 D^6 d)

  iMPS, conjugate gradient update:
    tensor update: χ^2 D^6 d^2 + χ^3 D^4 d^2 + χ^2 D^4 d^4 + S_CG S_LM D^5 d^5
    environment:   as for the variational update

  CTMRG, variational update:
    tensor update: χ^2 D^6 d^2 + χ^3 D^4 d^2 + χ^2 D^4 d^4 + S_VU (D^5 d^5 + D^6 d^3)
    environment:   S_CTMRG (χ^3 D^6 + χ^2 D^6 d)

  CTMRG, conjugate gradient update:
    tensor update: χ^2 D^6 d^2 + χ^3 D^4 d^2 + χ^2 D^4 d^4 + S_CG S_LM D^5 d^5
    environment:   S_CTMRG (χ^3 D^6 + χ^2 D^6 d)

  Simplified update:
    tensor update: S_S (D^2 d^3 + D d^4) + D^5 d^2
    environment:   one-time computation using the iMPS or CTMRG algorithm (see above)

Here, S_VU denotes the number of iterations for the variational update algorithm (see Appendix C). S_CG denotes the number of iterations for the conjugate gradient update algorithm and S_LM denotes the number of iterations for the line minimization contained in each conjugate gradient iteration (see Appendix C). S_φ denotes the number of iterations required for the infinite boundary MPS to converge (see Appendix A) and S_LR denotes the number of iterations required to compute the left and right scalar product matrices per infinite-MPS update (see Appendix A). S_v denotes the number of iterations required to converge the finite left and right eigenvectors per infinite-MPS update (see Appendix A). S_CTMRG denotes the number of iterations required to converge the corner transfer matrix (see Appendix B). S_S denotes the number of iterations required in the sparse iterative SVD routine in the simplified update.
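As a back-of-envelope illustration of the scalings in Table 5.1, the dominant environment cost can be turned into a toy cost model. This is a sketch under the rule-of-thumb assumption χ = D^2, with d = 2 chosen for a spin-1/2 system; the function name is ours, not part of the algorithm.

```python
# Toy cost model for the leading-order scalings in Table 5.1.
def environment_cost_per_step(D, d=2, chi=None):
    # One boundary-contraction step costs chi^3 D^6 + chi^2 D^6 d.
    chi = D**2 if chi is None else chi     # rule of thumb: chi grows as D^2
    return chi**3 * D**6 + chi**2 * D**6 * d

# With chi = D^2 the first term is D^12 - the critical path of the
# algorithm - so doubling D multiplies the cost by roughly 2^12.
ratio = environment_cost_per_step(4) / environment_cost_per_step(2)
```

The steep D^12 growth is why the full-environment update is restricted to D = 2, 3, 4 on a desktop machine, while the simplified update reaches D = 8.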
In this section, we describe how these well-known quantities - and additional measures rooted in quantum information theory - can be efficiently approximated for PEPS ground states.

5.4.1 Computation of Local Observables

Consider the computation of the expectation value of a local observable M for an infinite, translationally invariant state, |ψ⟩, represented by a PEPS. By local observable, we mean that in our local basis it acts as the identity everywhere except for a single site, i. We show the tensor network representation for the computation of ⟨ψ|M|ψ⟩ in figure 5.5. In figure 5.5i, we show the three parts of the tensor network 'sandwich' comprising the calculation of ⟨ψ|M|ψ⟩. In figure 5.5ii, we show that the quantity can be easily expressed as a 2D tensor network (with bond dimension D^2). Recall that our 2D tensor network contraction algorithms work for translationally invariant networks. The presence of the observable M means that the translational invariance is disturbed, but importantly it is only disturbed in a local region. The environment around the site is comprised of identical tensors a and b, and as for the iPEPS update we can approximately contract the surrounding region. In figure 5.5iii, we represent the approximate computation of ⟨ψ|M|ψ⟩, with the contraction of the purple 6-tensor environment and the remaining PEPS and observable tensors. For computing the expectation value of an observable acting on an L × L block of sites, the computational complexity scales exponentially in L.

5.4.2 Spatial Correlation Functions

A similar rationale can be applied to the computation of spatial correlation functions. We can contract 2D tensor networks without translational invariance, so long as the disturbance to the translational invariance is confined to some easily contractible finite region of the network. In figure 5.6, we show the computation of two-point spatial correlators, ⟨θ1 θ2⟩, along a horizontal or diagonal lattice direction.
Here, we contract the infinite, translationally invariant sections of the lattice via an infinite-MPS technique. The disturbance introduced by the operators θ1 and θ2 is then confined to a one-dimensional tract of the network, which is efficiently contracted. So whilst we may not be able to compute, for example, arbitrary three-point correlation functions, we can compute certain two-point correlation functions quite easily.

Figure 5.5: i) The computation of a local observable as a tensor network contraction. Here the red tensor at site i is the local observable operator. ii) Contraction along the physical indices leaves a 2D tensor network. iii) The observable is approximated by approximating the environment by the means described in Chapter 4 and Appendices A and B.

For an MPS, it is apparent that the correlation function will decay exponentially in the separation distance. The reason for this is that the number of exponentially decaying terms in the computation of the correlation function is capped by χ. For an infinite PEPS, there is an infinite number of paths between two lattice points. Thus, our correlation function is the sum of an infinite number of exponentially decaying terms. It is therefore possible to represent states with polynomially decaying correlation functions with a PEPS, such as quantum critical ground states. However, since we use an MPS with finite χ to contract the PEPS lattice, we will never in practice be able to reproduce a polynomially decaying correlation function with our approach. That said, we can still draw conclusions from studying the comparative rate of exponential decay of two ground states. Furthermore, the rate of polynomial decay of a critical system can be estimated by computing the correlator for various values of D and χ and then observing the asymptotic behaviour of the correlator.
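The claim that many decaying exponentials can imitate polynomial decay over a finite window can be illustrated directly. This is a standalone numerical sketch, not part of the PEPS machinery: discretizing the identity 1/x = ∫₀^∞ e^(−rx) dr gives a finite collection of exponentials that tracks 1/x until the slowest retained rate takes over.

```python
import numpy as np

# Discretize 1/x = \int_0^\infty exp(-r*x) dr with a midpoint rule.  The
# finite set of decay rates loosely plays the role of the chi terms kept
# by the boundary MPS: the power law is reproduced over a finite window.
dr = 0.001
rates = dr * (np.arange(20000) + 0.5)      # rates in (0, 20)

def summed_exponentials(x):
    return np.sum(np.exp(-rates * x)) * dr

# Tracks 1/x well for moderate x; at very large x the slowest retained
# exponential eventually dominates and the decay turns exponential again.
```

Retaining more (and slower) rates - the analogue of increasing χ - extends the window of approximately polynomial decay, mirroring the behaviour described above.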
5.4.3 Fidelity Measures

It was suggested in [ZB07, ZPac06] that certain fidelity measures are useful in the study of quantum phase transitions in one and two dimensions. That is, the fidelity captures information about the macroscopic similarity of two ground states corresponding to different values of some Hamiltonian parameter. Recall that traditionally, many quantum phase transitions are detected by a local order parameter, an observable that has value zero in one phase and non-zero in another. The choice of order parameter, a priori, is not always obvious. Fidelity measures, on the other hand, can suggest whether two ground states are in the same phase without reference to any kind of order parameter. By sampling the fidelity between ground states across the phase diagram and creating a so-called fidelity diagram, one can judge whether the QPT is first or higher order in nature. We define the fidelity in the standard sense,

F(λ, λ′) = |⟨ψ0(λ′)|ψ0(λ)⟩|     (5.5)

where |ψ0(λ′)⟩ and |ψ0(λ)⟩ are ground states of Hamiltonians parameterized by the coefficients λ′ and λ respectively.

Figure 5.6: The calculation of spatial correlation functions with PEPS. (above) Correlation functions along a horizontal (or vertical) direction can be computed by evolving infinite-MPS boundary states from above and below. Then, the remaining horizontal tract of tensors can be efficiently contracted. (below) Likewise, correlation functions along a diagonal lattice direction can be computed by employing infinite-MPS boundary states in a diagonal orientation.

Figure 5.7: The expression of the fidelity as a 2D tensor network. Here, the states |ψ0(λ)⟩ and |ψ0(λ′)⟩ are translationally invariant and contain tensors A, B and C, D respectively.

If two states differ only by a global phase, then F(λ, λ′) = 1.
On the other hand, it is quite possible that for different states, the fidelity F(λ, λ′) decays exponentially in the system size. For infinite systems, many different states have a fidelity of zero, regardless of how close they are in a local sense. For large many-body systems, it was proposed in [ZB07, ZPac06] that the rate at which the fidelity between two different ground states tends to zero can effectively express how close two states are in phase space. Such an intensive quantity is defined by considering that the fidelity scales as

F(λ, λ′) = [d(λ, λ′)]^L,     (5.6)

where L is the number of sites. The term d is known as the fidelity per lattice site. Taking the natural logarithm of both sides, we obtain

ln d(λ, λ′) = (1/L) ln F(λ, λ′).     (5.7)

It was shown in [ZOV08] that the computation of d for infinite (L → ∞) lattice systems is realized by capturing eigenvalues arising during the contraction of 2D networks like that shown in figure 5.7ii.

5.4.4 Computation of Entanglement Measures

We can also compute some of the elementary entanglement measures described in Chapter 3 with a PEPS. Take for instance the entanglement entropy of an adjacent pair of sites. To compute this, we first need to compute the reduced density matrix for the site pair. This involves a trivial modification to the steps in the computation of an observable (see figure 5.5): the physical indices of the site pair are left open instead of being contracted with the observable. The approximate reduced density matrix is then represented by the contraction of the network in figure 5.8. From this reduced density matrix, it is simple to compute the entanglement entropy. However, the computational complexity of computing the reduced density matrix for a contiguous block of sites scales exponentially in the number of sites, and so it is generally only possible to compute the entanglement entropy of relatively small blocks of sites.
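Once the two-site reduced density matrix has been obtained from the contraction of fig. 5.8, the entropy step itself is elementary. A minimal sketch (the density matrix below is a placeholder, not one produced by an actual PEPS contraction):

```python
import numpy as np

def entanglement_entropy(rho):
    # von Neumann entropy S = -tr(rho ln rho) via the eigenvalues of rho.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # discard numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Placeholder two-site density matrix: the maximally mixed state on two
# spins, whose entropy is ln 4 (the largest possible for a 4-level system).
rho = np.eye(4) / 4.0
S = entanglement_entropy(rho)
```

The same routine applies to any block, but as noted above the cost of obtaining the block's reduced density matrix grows exponentially with the block size.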
The Bloch vector was related to the reduced density matrix in equation 3.7. There, it was stated that there is a precise mapping between the reduced density matrix for a single site and the Bloch vector, and that the Bloch vector can be determined from a set of local observables:

r̃ = ⟨σx⟩ î + ⟨σy⟩ ĵ + ⟨σz⟩ k̂     (5.8)

Figure 5.8: The tensor contraction that gives the four-legged tensor containing the coefficients of the two-site reduced density matrix.

The magnitude of r̃, or purity, is thus computable from the expectation values of the spin-1/2 Pauli operators. The geometric entanglement of a PEPS state, as defined in equation 3.11, is the maximal fidelity per lattice site between the PEPS state and a product state. In our case, we have a PEPS representation of the ground state that is translationally periodic by some integer-site shift in a vertical or horizontal direction. Since translational invariance undergirds our ability to contract infinite 2D tensor networks and compute the fidelity per lattice site, the domain of our fidelity maximisation will be restricted to the set of product states that are themselves translationally invariant. For algorithmic simplicity, we choose |φ⟩ to be a D = 1 PEPS with the same periodicity as |ψ⟩. A variational algorithm for computing the geometric entanglement is described in more detail in Chapter 11.

5.5 Concluding Remarks

In this chapter, we have described the iPEPS algorithm - a technique for approximately computing the ground state of local 2D Hamiltonians. For a low-level treatment of certain important stages of the algorithm, we refer the reader to Appendices A, B and C. We have also described some important ways in which we can manipulate the PEPS to extract physical information about the state.
In the following chapters, we apply the iPEPS algorithm to lattice models, firstly benchmarking our algorithm against analytical results and models well studied by existing numerical schemes, and then applying our algorithm to problems beyond the reach of other algorithms.

Chapter 6 The 2D Classical Ising Model

6.1 Introduction

The Ising model is the most well-known example of a spin system for which rich macroscopic behaviour results from a simple microscopic description. The Hamiltonian describes a nearest-neighbour spin interaction where each spin is confined to two values, nominally called up and down. The interaction may be ferromagnetic or antiferromagnetic in nature, depending on whether the system energetically favours neighbouring spins pairing in the same or opposite directions. On the square lattice, this distinction is largely superficial, as the essential statistical physics of the two systems is identical up to a sign. As a result, in this chapter we confine our discussion to the ferromagnetic case. At zero temperature the classical solution is trivial - the system spontaneously breaks the spin-up/spin-down (Z2) symmetry and lies in a state in which all of the spins align in either the up or down direction. As temperature is increased, the Boltzmann weight of states with flipped spins increases and the partition function contains contributions from a greater number of classical states. At infinite temperature, the Boltzmann weights of all possible classical spin states are equal and the Z2 symmetry evident in the Hamiltonian is restored. The key question of Ising's original thesis [Isi25] was whether the system exhibited a classical phase transition.
That is, was there a finite-temperature point in the phase diagram where, in the thermodynamic limit, the partition function of the system switched from exhibiting magnetic order (spontaneously broken Z2 symmetry) to disorder (symmetry restored)? He determined that in one dimension there is no such transition and incorrectly assumed that this would hold for two and higher dimensions as well. In two dimensions, Onsager famously determined that a phase transition does in fact occur and obtained analytical expressions for properties such as the magnetization of the state at a given temperature [Ons44]. The order-disorder transition at inverse temperature β = log(1 + √2)/2 ≈ 0.4407 was seen to be a second-order phase transition. In particular, this meant that spatial correlation functions of the classical statistical ensemble decayed polynomially with site separation. In Chapter 4 we explained that determining the statistical properties of 2D classical systems at finite temperature amounted to contracting an infinite 2D tensor network. There, we also described some approximate schemes for contracting infinite, translationally invariant 2D tensor networks. This operation was also seen to be key to computing the environment in the iPEPS algorithm. In this chapter (see also [OV08]) we show in detail how we can encode the partition function of the classical Ising model in a 2D tensor network, and describe how the basis structure can be altered to compute local statistical properties. By computing such properties and comparing with Onsager's exact solution, we can benchmark our tensor network contraction algorithm. In doing so, we will not only be able to validate an important part of the iPEPS algorithm, but also demonstrate some basic notions of tensor network representations of strongly correlated states.
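The exact benchmark curve used in this chapter follows from the well-known closed form for the spontaneous magnetization of the 2D Ising model, m(β) = (1 − sinh(2β)^(−4))^(1/8) for β > β_c, which is straightforward to evaluate. A minimal sketch:

```python
import numpy as np

beta_c = np.log(1 + np.sqrt(2)) / 2        # critical point, ~0.4407

def onsager_magnetization(beta):
    # Exact spontaneous magnetization of the 2D Ising model; zero in the
    # disordered phase beta <= beta_c.
    if beta <= beta_c:
        return 0.0
    return float((1 - np.sinh(2 * beta)**(-4))**0.125)
```

Values of this function provide the "exact" curves against which the tensor-network magnetizations are compared below.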
6.2 The Model

We introduce the classical (ferromagnetic) Ising model Hamiltonian

H_2DC = − Σ_<i,j> S_i S_j,     (6.1)

where i and j represent adjacent lattice sites. The Hamiltonian is Z2 symmetric, as it is invariant under a transformation that flips all of the spins, i.e. S_k → −S_k, ∀k. At inverse temperature β = 1/T, the partition function is defined in the usual sense,

Z = Σ_σ e^(−βH(σ)),     (6.2)

where each σ = {S1, S2, S3, ..., SN} is a unique spin configuration of the system. As mentioned, our intention is to recast the evaluation of the partition function as the contraction of a 2D tensor network. This is done by firstly finding the matrices

T = [ e^β     e^(−β) ]
    [ e^(−β)  e^β    ]     (6.3)

that, when multiplied (contracted) in a certain way, compute the partition function. The arrangement of T matrices is shown in fig. 6.1i). In the spirit of Onsager's solution, an entire row (or column) of the T matrices forms a transfer matrix. Next, we split each T by SVD (fig. 6.1ii) and reform the tensors at the lattice sites (fig. 6.1iii). This creates an infinite square tensor network which, when contracted, computes the partition function of the 2D classical Ising model. To thoroughly justify the connection to the iPEPS algorithm, one can completely reproduce the PEPS structure, as described in [VWPGC06]. The statistical properties and spatial correlation functions of the classical system can be calculated using the very same algorithms that we developed for quantum states represented by a PEPS. This simple and seemingly benign procedure is actually of tremendous importance. It shows that a state with polynomially decaying correlation functions (i.e. a critical 'quantum' state) can be exactly encoded in an infinite PEPS with bond dimension D = 2.

6.3 Results

The calculation of local expectation values can be performed in either the straightforward 2D tensor network representation or the iPEPS representation of the system.
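The interaction matrix of eq. (6.3) and its SVD split can be verified in a few lines. As a self-contained sanity check we use a one-dimensional ring, where Z = tr(T^N) holds exactly; the two-dimensional contraction itself is the subject of the algorithms described above and is not reproduced here.

```python
import numpy as np

beta = 0.3
# Interaction matrix of eq. (6.3): T[s, s'] = exp(beta * s * s'), with
# index 0 <-> spin +1 and index 1 <-> spin -1.
T = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

# Split T by SVD and absorb sqrt(S) into each factor (cf. fig. 6.1ii).
U, S, Vt = np.linalg.svd(T)
u, vt = U * np.sqrt(S), np.sqrt(S)[:, None] * Vt
assert np.allclose(u @ vt, T)              # the split reproduces T exactly

# Sanity check on a ring of N classical spins: Z = tr(T^N) must equal the
# brute-force sum over all 2^N configurations of exp(-beta * H).
N = 6
Z_transfer = np.trace(np.linalg.matrix_power(T, N))
Z_brute = 0.0
for config in range(2**N):
    s = [1 - 2 * ((config >> k) & 1) for k in range(N)]
    Z_brute += np.exp(beta * sum(s[k] * s[(k + 1) % N] for k in range(N)))
```

The split tensors u and vt are the objects that recombine into the site tensor A of fig. 6.1iii in the 2D construction.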
As an example, we may wish to compute the average spin at a given site, i:

⟨S_i⟩ = (1/Z) Σ_σ S_i(σ) e^(−βH(σ))     (6.4)

This quantity is similar to the partition function, except that microstates with a down spin at site i acquire a negative sign in the summation. In the straightforward approach, such a computation amounts to replacing the A tensor in fig. 6.1iii) at site i with a modified tensor A0. In the PEPS approach, we simply insert the σz Pauli operator along the physical indices at site i, just as we would to compute a quantum observable.

Figure 6.1: The process for expressing the partition function as a 2D square tensor network. i) The partition function in terms of interaction matrices, T. ii) The splitting of the T matrices by singular value decomposition. iii) The recombination of the split matrices into the tensor A, creating a translationally invariant, square 2D network.

Magnetization

A plot of the magnetization against the inverse temperature β is shown in figure 6.2, along with the exact solution. Here, the only varying parameter is the value of χ used for the boundary state. For each χ, the solid line represents a result computed with the CTMRG method, and the dotted line a result computed with the infinite-MPS approach. There are two conclusions to draw from this plot. Firstly, as χ increases, the PEPS magnetization generally becomes closer to the analytic result. This is not universally true - in our plot it can be seen that the infinite-MPS result for χ = 20 is actually inferior to that for χ = 16. This is most likely explained by power-of-two values of χ retaining some symmetry in the boundary state, and is usually only observed for small values of χ. Secondly, for the same χ, one can see that CTMRG gives better magnetization results. This may suggest that the CTMRG more efficiently stores correlations of an infinite network.
One possible reason for such a result is that at each step of the infinite-MPS boundary state evolution, we keep information that holds correlations between infinite halves of the network. Furthermore, say the MPS is horizontal - then horizontal and vertical correlations are treated quite differently. On the other hand, our CTMRG algorithm (Appendix B) proceeds by adding sites to the environment one at a time and renormalizing to retain a finite set of correlations. The local correlations, those closest in a radial sense to the unit cell, are strongest and favoured to be retained. This may be why the CTMRG performs better in the computation of local observable quantities. A second parameter of interest in contracting infinite systems is the number of iterations until convergence of the boundary state. In figure 6.2 we used 10,000 iterations to converge the boundary state. In figure 6.3 we show a plot of the magnetization for constant χ = 48 and boundary iterations from 100 up to 10,000. One can see a marked change in the magnetization error profile as the number of iterations increases. Even at 10,000 iterations, it can be seen that the computed magnetization is still some distance from the exact magnetization near the critical point. This reflects the great difficulty of simulating critical systems with tensor networks. As the system becomes more strongly correlated, accurately describing the physics requires both an increased tensor bond dimension and an increased number of algorithm iterations. From the above results, it can be seen that the classical Ising model provides a very useful toolbox for testing the contraction of an infinite 2D tensor network. When contracting an infinite 2D network, we are faced with a trade-off between accuracy and computational time - larger χ and more boundary evolution steps mean more accurate results at the cost of computational cycles.
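The accuracy/iterations trade-off near criticality has a simple caricature: boundary-state convergence behaves like a power iteration, whose error contracts per step by the ratio of the two largest transfer eigenvalues, and near criticality that ratio approaches one. A toy sketch with invented spectra (not data from the actual simulations):

```python
import numpy as np

def boundary_overlap_deficit(spectrum, n_steps):
    # Power-iterate a diagonal "transfer matrix" and report how far the
    # iterate still is from the true dominant boundary vector.
    A = np.diag(spectrum)
    v = np.ones(len(spectrum)) / np.sqrt(len(spectrum))
    for _ in range(n_steps):
        v = A @ v
        v /= np.linalg.norm(v)
    return 1.0 - abs(v[int(np.argmax(spectrum))])

gapped = boundary_overlap_deficit([1.0, 0.5, 0.25], 100)      # converged
critical = boundary_overlap_deficit([1.0, 0.999, 0.99], 100)  # far from it
```

After the same 100 steps the gapped toy spectrum has fully converged while the near-degenerate one has barely moved, mirroring the slow convergence seen in figure 6.3 near the critical point.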
Figure 6.2: A plot of the magnetization (above) and magnetization error (below) of the 2D classical Ising model for various χ, along with the exact solution. The dotted plots indicate that the infinite-MPS boundary state technique was used. The solid results are derived from CTMRG. Note that i) large χ generally results in a smaller magnetization error; ii) CTMRG outperforms the iMPS approach.

Two-point correlation function

We compute the two-point correlation function

⟨S_{x,y} S_{x+i,y}⟩ = (1/Z) Σ_σ S_{x,y}(σ) S_{x+i,y}(σ) e^(−βH(σ))     (6.5)

as a function of the horizontal site separation |i|. At the critical temperature, it is known that this correlation function should decay polynomially, with exponent η = 1/4. The plot in figure 6.4 shows the two-point correlator for various values of χ, along with the exact correlator. Once again, it can be seen that the correlation function more closely approximates the exact solution as χ increases. For χ = 60, the correlator is almost indistinguishable from the exact result for separations of over 1000 sites. Importantly however, we cannot fully reproduce the polynomially decaying correlation function with finite χ. This confirms a key limitation of the iPEPS algorithm described in Chapter 5.

Figure 6.3: A plot of the magnetization of the 2D classical Ising model for various numbers of boundary state iterations. Note that as the number of iterations increases, we more closely track the exact solution.
Here, we have an exact representation of a critical correlation function, but by computing it using an infinite MPS with fixed χ, we can only capture exponentially decaying correlations.

Figure 6.4: A plot showing the two-point correlation function of the classical Ising model for various χ, along with the exact solution. Note that as χ increases, we obtain results that better approximate the polynomially decaying exact solution. Also note that though all of the results here are exponentially decaying in the infinite limit, as we increase χ we see a larger window of approximately polynomial decay.

Chapter 7 The 2D Quantum Ising Model

7.1 Introduction

In this chapter, we study the quantum Ising model on the square lattice, a simple example of a non-trivial model of quantum magnetism. The quantum Ising Hamiltonian takes the form

H = − Σ_<i,j> σz^i σz^j − λx Σ_i σx^i,     (7.1)

where σz and σx are Pauli operators. The system here is tunable by an external transverse magnetic field, λx. We refer to the eigenstates of σz as |+⟩z and |−⟩z. The Hamiltonian is Z2 symmetric, as the energy is invariant under a flip of every spin in the system, i.e. |+⟩z → |−⟩z, |−⟩z → |+⟩z. Equivalently, the Hamiltonian is invariant under the substitution σz → −σz. In the Landau theory of phase transitions [Lan37], the phases are distinguished by whether the symmetry of the Hamiltonian is conserved or broken in the ground state. Since the Hamiltonian is invariant under the substitution σz → −σz, a symmetric ground state, |Ψsym⟩, should be marked by having zero magnetization in the z-direction, i.e.

⟨Ψsym|σz|Ψsym⟩ = ⟨Ψsym|(−σz)|Ψsym⟩ = −⟨Ψsym|σz|Ψsym⟩ = 0.

By contrast, a state with broken symmetry will have non-zero magnetization in the z-direction.
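The Z2 symmetry is easy to verify explicitly on a small system. A sketch, under the assumption that a 1D ring of N = 4 spins stands in for the 2D lattice (the symmetry argument itself is dimension-independent):

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op_at(op, site, N):
    # Embed a single-site operator at `site` in an N-site tensor product.
    return reduce(np.kron, [op if k == site else I2 for k in range(N)])

N, lam = 4, 3.0
H = -sum(op_at(sz, k, N) @ op_at(sz, (k + 1) % N, N) for k in range(N))
H = H - lam * sum(op_at(sx, k, N) for k in range(N))

# Global spin flip P = sx^(x N): the Hamiltonian of eq. (7.1) commutes
# with it, which is the statement of the Z2 symmetry discussed above.
P = reduce(np.kron, [sx] * N)
commutator_norm = np.linalg.norm(H @ P - P @ H)   # ~ 0
```

Since P flips σz to −σz while leaving σz σz and σx invariant, any unbroken eigenstate of H must have vanishing ⟨σz⟩, as derived above.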
The longitudinal magnetization mz(λx) = ⟨Ψλx|σz|Ψλx⟩ is therefore an order parameter of the system. This magnetization can be positive or negative in value, depending on whether the average spin is up or down. When the magnetization is positive, we may say that the ground state lies in a spin-up region of phase space. Conversely, when the magnetization is negative, we say the ground state lies in a spin-down region of phase space. At λx = 0, the ground state is in a product state configuration, where either every spin is in the |+⟩z state or every spin is in the |−⟩z state. The system exhibits a phenomenon known as spontaneous symmetry breaking, where it randomly chooses either the spin-up or spin-down configuration. In the limit of infinite λx, the state is the |+⟩x eigenstate of σx, and the Z2 symmetry is restored. The task is to determine at what value of λx the ground state changes between these types of behaviour, and to describe the phase diagram of the system. Before proceeding, it should be noted that the transverse magnetic field λx plays an analogous role to temperature in the classical Ising model, in that it encourages disorder in the σz-basis of the state. Indeed, the quantum Ising Hamiltonian in D spatial dimensions is derivable from the classical Ising Hamiltonian in (D + 1) spatial dimensions, by treating one of the spatial dimensions in the classical problem as representing imaginary time (see e.g. [FS78]). This makes concrete the quantum-classical correspondence of Chapter 2, the result being that certain (universal) properties of the 2D quantum Ising model map exactly to certain (universal) properties of the 3D classical Ising model. Although there exists no exact solution for the quantum Ising model in two spatial dimensions, it has been well studied for finite systems by various numerical techniques such as quantum Monte Carlo (QMC) [BD02], series expansion (SE) [HHO90] and exact diagonalization (ED).
Since the QMC treatment of the quantum Ising model does not suffer from the sign problem, finite-size scaling of QMC simulations produces numerical data thought to be of an extremely high accuracy. From these, it has been estimated that the phase transition occurs at λx ≈ 3.044. Moreover, it is strongly suggested that the phase transition is of second order (as in the 1D quantum Ising model) with a critical exponent of β ≈ 0.327 [BD02]. Our intention here is to benchmark the iPEPS algorithm against these results, to establish that the iPEPS algorithm can be used to effectively study the phase diagram of a 2D quantum system.

7.2 Results

We perform the imaginary-time evolution for this model with the algorithm described in Chapter 5. As a first attempt, we use the infinite-MPS method to contract the boundary state. Having obtained the ground state PEPS representation for values of λx between 0 and 5, we plot the energy per link,

e_l = (1/4) Σ_k ⟨Ψλx| h_l^[r, r+k] |Ψλx⟩,  r ∈ L_A,  k ∈ {x̂, −x̂, ŷ, −ŷ},     (7.2)

where L_A is the sub-lattice defined as in section 5.2.1 and

h_l^[i,j] = −σz^i σz^j − (λx/4) σx^i − (λx/4) σx^j,     (7.3)

and the transverse magnetization,

mx(λx) = ⟨Ψλx|σx|Ψλx⟩,     (7.4)

in fig. 7.1. Here, we have compared the plotted PEPS points against series expansion results, obtaining very good agreement between the two. One can see that the first derivative of the energy appears continuous, suggesting that the QPT is a continuous phase transition. The discontinuity in the first derivative of mx suggests there could be a phase transition at this point. In fig. 7.2, we plot the longitudinal magnetization,

mz(λx) = ⟨Ψλx|σz|Ψλx⟩,     (7.5)

against λx. We say that the phase with non-zero longitudinal magnetization is ordered and the phase with zero longitudinal magnetization disordered. The inset shows clearly that the D = 3 solution predicts a magnetization closer to the QMC result than the D = 2 solution.
This is in line with D = 3 producing a lower-energy approximation to the ground state. Critical exponents describe how certain physical quantities behave near a phase transition. They are universal quantities in the sense that many critical systems with non-trivial differences in their microscopic description have identical critical exponents. We say that such critical systems are in the same universality class and that the long-range behaviour of systems in the same universality class is identical. The critical exponents depend on properties such as the spatial dimension of the system and lattice geometry, the spin dimension, Hamiltonian symmetries and the range of interactions.

Figure 7.1: Transverse magnetization mx and energy per site e of the quantum Ising model as a function of the transverse magnetic field h. The continuous line shows series expansion results (to 26th and 16th order in perturbation theory) for h smaller and larger than hc ≈ 3.044 [HHO90]. Increasing D leads to a lower energy per site e. For instance, at h = 3.1, e(D = 2) ≈ −1.6417 and e(D = 3) ≈ −1.6423.

For the Ising model, the order parameter mz scales near the critical point as

mz(λx) = m0 |λx − λx,crit|^β.     (7.6)

Here, the parameter β is the critical exponent. With PEPS, we obtain an estimate of β = 0.328, which is within 2% of the QMC estimate. The results so far have been determined using the infinite-MPS contraction scheme to determine the environment. We now wish to investigate the effect of using the corner transfer matrix to compute the environment for the PEPS update, or alternatively of using the simplified update scheme, in which the link updates proceed without determination of any environment tensors. For the classical Ising model, it was seen that the use of CTMRG resulted in a smaller error in the magnetization (for the same χ). This suggested that the CTM better represents the correlations between the environment and the unit cell.
On the other hand, it is believed that the computationally inexpensive simplified update should perform well only in regions where the correlations are very local in nature. Near the critical point, the simplified scheme is expected to struggle to capture the correct physics due to the diverging correlation length of the system. We want to see how the iMPS, CTMRG and simplified-update variants of the algorithm perform in computing the order parameter and critical exponent of the quantum Ising model. The results are shown in figure 7.3 and quite resoundingly affirm the idea that the CTM stores correlations more efficiently for the computation of local properties. Here, the CTM with D = 2 predicts a critical point of λc = 3.08 and a D = 3 CTM predicts it as λc = 3.04. Meanwhile, it can be seen that the simplified update scheme performs quite well far from the critical point, but differs significantly near where QMC and the other methods suggest a phase transition.

Figure 7.2: Magnetization mz(λ) of the quantum Ising model as a function of the transverse magnetic field λ. Dashed lines are a guide to the eye. We have used the diagonal scheme for (D, χ) = (2, 20), (3, 25) and (4, 35) (the vertical/horizontal scheme leads to comparable results with slightly smaller χ). The inset shows a log plot of mz versus |λ − λc|, including our estimates of λc and β. The continuous line shows the linear fit.

Figure 7.3: A comparison of the order parameter for iMPS, CTMRG and the simplified update.

Table 7.1: Critical point and exponent β as a function of D.

  Method                                λc      β
  QMC, Ref. [BD02]                      3.044   0.327
  D = 2 iMPS                            3.10    0.346
  D = 3 iMPS                            3.06    0.332
  D = 2 CTMRG                           3.08    0.333
  D = 3 CTMRG                           3.04    0.328
  D = 3 VDMA, Refs. [NOH+00, NHO+01, GMN03]   3.2     –
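The β estimates of Table 7.1 come from fitting eq. (7.6) in log-log form, as in the inset of fig. 7.2. A sketch of such a fit on synthetic data (the data below are generated from eq. (7.6) itself with illustrative parameter values, not taken from our simulations):

```python
import numpy as np

lam_c, beta_true, m0 = 3.044, 0.327, 1.0   # illustrative values only
lam = np.linspace(2.6, 3.0, 20)            # ordered side of the transition
mz = m0 * np.abs(lam - lam_c)**beta_true   # eq. (7.6)

# In log-log form eq. (7.6) is linear:
#   ln mz = beta * ln|lam - lam_c| + ln m0,
# so beta is recovered as the slope of a straight-line fit.
slope, intercept = np.polyfit(np.log(np.abs(lam - lam_c)), np.log(mz), 1)
```

With real data, λc must be estimated jointly with β (typically by scanning λc for the best linear fit), which is why the table reports both quantities together.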
Figure 7.4: Two-point correlator $S_{xx}(l)$ of the quantum Ising model near the critical point, $\lambda = 3.05$. For nearest neighbours, the correlator quickly converges as a function of $D$, whereas for long distances we expect to see convergence for larger values of $D$.

Figure 7.5: Fidelity diagram of the quantum Ising model, computed from a catalogue of $D = 2$ ground states.

7.2.1 Two-Point Correlation Functions

Figure 7.4 plots the two-point correlation function $S_{xx} = \langle\psi|\, \sigma_x^{[\tilde r]} \sigma_x^{[\tilde r + \tilde\imath\,]} \,|\psi\rangle$, where $\tilde\imath$ is some displacement in a horizontal direction. The states used correspond to a magnetic field near criticality, $\lambda_x = 3.05$. On this plot, it is clear that both correlations decay exponentially (a polynomial decay would appear linear on a log-log plot). Furthermore, the $D = 2$ correlation decays at a rate faster than the $D = 3$ correlation. However, the $D = 3$ curve shows a greater tendency towards polynomial decay, more evidence that as $D$ increases, the qualitative features of the ground-state approximation more closely match the true physical properties.

7.2.2 Fidelity plot

The fidelity plot for the 2D quantum Ising model is shown in fig. 7.5. Our results here are determined from PEPS ground states with bond dimension $D = 2$. Such a plot was also presented in [ZOV08]. There, it was suggested that the characteristic "pinch point" of the surface near the point $(\lambda_x^1, \lambda_x^2) = (\lambda_c, \lambda_c)$ is indicative of a continuous phase transition. We will see a quite different picture for a first-order phase transition in Chapter 9.

Chapter 8
The Hard-Core Bose-Hubbard Model

8.1 Introduction

The physics of interacting bosons at low temperature has long attracted considerable interest due to the occurrence of Bose-Einstein condensation [DGPS99].
The Bose-Hubbard model, a simplified microscopic description of an interacting boson gas in a lattice potential, is commonly used to study related phenomena, such as the superfluid-to-insulator transition in liquid helium [FWGF89] or the onset of superconductivity in granular superconductors [JHOG89] and arrays of Josephson junctions [BD84]. In more recent years, the Bose-Hubbard model has also been employed to describe experiments with cold atoms trapped in optical lattices [JBC+98, GBM+01, GME+02].

In this chapter we initiate the exploration of interacting bosons in an infinite 2D lattice with tensor network algorithms. We use the iPEPS algorithm explained in Chapter 5 and [JOV+08] to characterize the ground state of the hard-core Bose-Hubbard (HCBH) model, namely the Bose-Hubbard model in the hard-core limit, where either zero or one boson is allowed on each lattice site. Although no analytical solution is known for the 2D HCBH model, there is already a wealth of numerical results based on mean-field theory, spin-wave corrections and stochastic series expansion [BBM+02]. These techniques have been quite successful in determining some of the properties of the ground state of the 2D HCBH model, such as its energy, particle density or condensate fraction.

Our goal in this chapter is twofold. Firstly, by comparing our results against those of Ref. [BBM+02], we aim to benchmark the performance of the iPEPS algorithm on the HCBH model. Secondly, once the validity of the iPEPS algorithm for this model has been established, we use it to obtain results that are harder to compute with (or simply well beyond the reach of) the other approaches. These include the analysis of entanglement, two-point correlators, fidelities between different ground states [ZPac06, ZB07, ZOV08], and the simulation of time evolution. We note that the present results naturally complement those of Ref.
[MVC07] for finite systems, where the PEPS algorithm [VC04] was used to study the HCBH model on a lattice made of at most 11 × 11 sites.

8.2 Model

The Bose-Hubbard model [FWGF89] with on-site and nearest-neighbour repulsion is described by the Hamiltonian
$$H_{\rm BH} = -J \sum_{\langle i,j \rangle} \left( a_i^\dagger a_j + a_j^\dagger a_i \right) - \mu \sum_i \hat n_i + V_1 \sum_i \hat n_i (\hat n_i - 1) + V_2 \sum_{\langle i,j \rangle} \hat n_i \hat n_j,$$
where $a_i^\dagger, a_i$ are the usual bosonic creation and annihilation operators, $\hat n_i = \hat\rho_i \equiv a_i^\dagger a_i$ is the number (density) operator at site $i$, $J$ is the hopping strength, $\mu$ is the chemical potential, and $V_1, V_2 \geq 0$. The four terms in the above equation describe, respectively, the hopping of bosonic particles between adjacent sites ($J$), a single-site chemical potential ($\mu$), an on-site repulsive interaction ($V_1$) and an adjacent-site repulsive interaction ($V_2$).

Here we shall restrict our attention to on-site repulsion only ($V_2 = 0$) and to the so-called hard-core limit in which this on-site repulsion dominates ($V_1 \to \infty$). Under these conditions the local Hilbert space at every site describes the presence or absence of a single boson and has dimension 2. With the hard-core constraint in place, the Hamiltonian becomes
$$H_{\rm HC} = -J \sum_{\langle i,j \rangle} \left( a_i^\dagger a_j + a_j^\dagger a_i \right) - \mu \sum_i \hat n_i, \qquad (8.1)$$
where $a_i^\dagger, a_i$ are now hard-core bosonic operators obeying the commutation relation
$$\left[ a_i, a_j^\dagger \right] = (1 - 2\hat n_i)\, \delta_{ij}.$$

A few well-known facts about the hard-core Bose-Hubbard (HCBH) model are:

(i) U(1) symmetry.— The HCBH model inherits particle number conservation from the Bose-Hubbard model,
$$[H_{\rm HC}, \hat N] = 0, \qquad \hat N \equiv \sum_l \hat n_l, \qquad (8.2)$$
and it thus has a $U(1)$ symmetry, corresponding to transforming each site $l$ by $e^{i\phi \hat n_l}$, $\phi \in [0, 2\pi)$.

(ii) Duality transformation.— In addition, the transformation $a_l \to a_l^\dagger$ applied on all sites $l$ of the lattice maps $H_{\rm HC}(\mu)$ into $H_{\rm HC}(-\mu)$ (up to an irrelevant additive constant). Accordingly, the model is self-dual at $\mu = 0$, and results for, say, $\mu > 0$ can be easily obtained from those for $\mu < 0$.
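Because a hard-core site carries a two-dimensional Hilbert space, the modified commutation relation above can be verified directly with 2 × 2 matrices. The following sketch (illustrative only, not thesis code) does so in the $\{|0\rangle, |1\rangle\}$ occupation basis:

```python
import numpy as np

# Check of the on-site hard-core relation [a_i, a_j^dag] = (1 - 2 n_i) delta_ij
# in the 2x2 single-site representation (an illustrative sketch).

a = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # annihilation operator: a|1> = |0>, a|0> = 0
adag = a.T                       # creation operator
n = adag @ a                     # number operator, diag(0, 1)

# same site (i = j): [a, a^dag] = 1 - 2n, not the bosonic identity
comm = a @ adag - adag @ a
assert np.allclose(comm, np.eye(2) - 2 * n)

# distinct sites (i != j): operators act on different tensor factors and commute
A1 = np.kron(a, np.eye(2))
A2dag = np.kron(np.eye(2), adag)
assert np.allclose(A1 @ A2dag - A2dag @ A1, np.zeros((4, 4)))
print("hard-core commutation relation verified")
```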
(iii) Equivalence with a spin model.— The HCBH model is equivalent to a quantum spin-$\frac{1}{2}$ model, namely the ferromagnetic quantum XX model,
$$H_{XX} = -\frac{J}{2} \sum_{\langle i,j \rangle} \left( \sigma_i^x \sigma_j^x + \sigma_i^y \sigma_j^y \right) + \frac{\mu}{2} \sum_i \sigma_i^z, \qquad (8.3)$$
which is obtained from $H_{\rm HC}$ with the replacements
$$a_l = \frac{\sigma_l^x + i \sigma_l^y}{2}, \qquad a_l^\dagger = \frac{\sigma_l^x - i \sigma_l^y}{2},$$
where $\sigma^x$, $\sigma^y$ and $\sigma^z$ are the spin-$\frac{1}{2}$ Pauli matrices. In particular, all the results of this chapter also apply, after a proper translation, to the ferromagnetic quantum XX model on an infinite square lattice.

(iv) Ground-state phase diagram.— The hopping term in $H_{\rm HC}$ favors delocalization of individual bosons in the ground state, whereas the chemical potential term determines the ground-state bosonic density $\rho$,
$$\rho \equiv \frac{1}{N} \sum_i \langle a_i^\dagger a_i \rangle.$$
For $\mu$ negative, a sufficiently large value of $|\mu|$ forces the lattice to be completely empty, $\rho = 0$. Similarly, a large value of (positive) $\mu$ forces the lattice to be completely full, $\rho = 1$, as expected from the duality of the model. In both cases there is a gap in the energy spectrum and the system is a Mott insulator. When, instead, the kinetic term dominates, the density has some intermediate value $0 < \rho < 1$, the cost of adding/removing bosons to the system vanishes, and the system is in a superfluid phase [FWGF89]. The latter is characterized by a finite fraction of bosons in the lowest momentum mode $\tilde a_{k=0} \equiv (1/N) \sum_i a_i$, that is, by a non-vanishing condensate fraction $\rho_0$,
$$\rho_0 \equiv \langle \tilde a_{k=0}^\dagger \tilde a_{k=0} \rangle = \frac{1}{N^2} \sum_{i,j} \langle a_j^\dagger a_i \rangle.$$
In the thermodynamic limit, $N \to \infty$, a non-vanishing condensate fraction is only possible in the presence of off-diagonal long-range order (ODLRO) [Yan62], or $\langle a_j^\dagger a_i \rangle \neq 0$ in the limit of large distances $|i - j|$, given that
$$\rho_0 = \lim_{|i-j| \to \infty} \langle a_j^\dagger a_i \rangle. \qquad (8.4)$$

(v) Quantum phase transition.— Between the Mott insulator and superfluid phases, there is a continuous quantum phase transition [FWGF89], tuned by the ratio $\mu/J$.
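The duality of point (ii) can be made concrete by exact diagonalization on a tiny cluster: conjugating $H_{\rm HC}(\mu)$ by the particle-hole transformation $a_l \to a_l^\dagger$ sends $\hat n_l \to 1 - \hat n_l$, so the spectrum of $H_{\rm HC}(\mu)$ equals that of $H_{\rm HC}(-\mu)$ shifted by $-\mu N$. The sketch below (our own illustration, not thesis code; the 2 × 2 plaquette with bonds (0,1), (2,3), (0,2), (1,3) is an assumed minimal geometry) checks this numerically:

```python
import numpy as np
from functools import reduce

# Exact-diagonalization check of the duality a_l -> a_l^dag:
# spec H(mu) = spec H(-mu) - mu * N_sites, on a single 2x2 plaquette.

I2 = np.eye(2)
a = np.array([[0., 1.], [0., 0.]])   # hard-core annihilation operator
n = a.T @ a

def site_op(op, site, nsites=4):
    """Embed a single-site operator at `site` in the 2^nsites-dim space."""
    ops = [I2] * nsites
    ops[site] = op
    return reduce(np.kron, ops)

def hcbh(mu, J=1.0, bonds=((0, 1), (2, 3), (0, 2), (1, 3)), nsites=4):
    H = np.zeros((2**nsites, 2**nsites))
    for i, j in bonds:   # hopping term
        H -= J * (site_op(a.T, i) @ site_op(a, j) + site_op(a.T, j) @ site_op(a, i))
    for i in range(nsites):  # chemical potential term
        H -= mu * site_op(n, i)
    return H

mu = 0.7
e1 = np.sort(np.linalg.eigvalsh(hcbh(mu)))
e2 = np.sort(np.linalg.eigvalsh(hcbh(-mu)))
assert np.allclose(e1, e2 - mu * 4)   # duality up to the additive constant -mu*N
print("duality verified")
```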
8.3 Results

In this section we present the numerical results obtained with the iPEPS algorithm. Without loss of generality, we fix the hopping strength $J = 1$ and compute an approximation to the ground state $|\Psi_{GS}\rangle$ of $H_{\rm HC}$ for different values of the chemical potential $\mu$. Then we use the resulting PEPS/TPS to extract the expectation value of local observables, analyze ground-state entanglement, compute two-point correlators and fidelities, or as the starting point for an evolution in real time. In most cases we only report results for $\mu \leq 0$ (equivalently, density $0 \leq \rho \leq 0.5$), since due to the duality of the model, results for positive $\mu$ (equivalently, $0.5 \leq \rho \leq 1$) can be obtained from those for negative $\mu$.

8.3.1 Local observables and phase diagram

Particle density $\rho$.— Fig. 8.1 shows the density $\rho$ as a function of the chemical potential $\mu$ in the interval $-4 \leq \mu \leq 0$. Notice that $\rho = 0$ for $\mu \leq -4$, since every site is vacant. Our results are in remarkable agreement with those obtained in Ref. [BBM+02] with stochastic series expansions (SSE) for a finite lattice made of 32 × 32 sites and with a mean-field calculation plus spin-wave corrections (SW). We note that the curves $\rho(\mu)$ for $D = 2$ and $D = 3$ are very similar.

Energy per site $\epsilon$.— Fig. 8.1 also shows the energy per site $\epsilon$ as a function of the density $\rho$. This is obtained by computing $\epsilon(\mu)$ and then replacing the dependence on $\mu$ with $\rho$ by inverting the curve $\rho(\mu)$ discussed above. Again, our results for $\epsilon(\rho)$ are in remarkable agreement with those obtained in Ref. [BBM+02] with stochastic series expansions (SSE) for a finite lattice made of 32 × 32 sites. They are also very similar to the results coming from mean-field calculations with spin-wave corrections (SW) of Ref. [BBM+02], and for small densities reproduce the scaling (valid only in the regime of a very dilute gas) predicted in Refs. [Sch71, HFM78] by using field-theory methods based on a summation of ladder diagrams.
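The re-parametrisation from $\epsilon(\mu)$ to $\epsilon(\rho)$ described above only requires inverting the monotonic curve $\rho(\mu)$; a minimal sketch with made-up placeholder curves (the toy formulas below are not thesis data) is:

```python
import numpy as np

# Sketch: given sampled curves eps(mu) and rho(mu), obtain eps(rho) by
# inverting the monotonically increasing rho(mu) with linear interpolation.

mu = np.linspace(-4.0, 0.0, 41)
rho_of_mu = 0.5 * (1 + mu / 4.0)        # toy monotone density: rho(-4)=0, rho(0)=0.5
eps_of_mu = -0.5 * rho_of_mu**2 - 1.0   # toy energy curve

def eps_of_rho(rho_target):
    # np.interp requires the x-samples (rho_of_mu) to be increasing, as here
    mu_target = np.interp(rho_target, rho_of_mu, mu)
    return float(np.interp(mu_target, mu, eps_of_mu))

print(eps_of_rho(0.25))   # energy of the toy curve at quarter filling
```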
Once more, the curves $\epsilon(\rho)$ obtained with bond dimension $D = 2$ and $D = 3$ are very similar, although $D = 3$ produces slightly lower energies.

Condensate fraction $\rho_0$.— In order to compute the condensate fraction $\rho_0$, we exploit the fact that the iPEPS algorithm induces a spontaneous breaking of particle number conservation. Indeed, one of the effects of having a finite bond dimension $D$ is that the PEPS/TPS that minimizes the energy does not have a well-defined particle number. As a result, instead of having $\langle a_i \rangle = 0$, we obtain a non-vanishing value $\langle a_i \rangle \neq 0$ such that
$$\rho_0 = \lim_{|i-j| \to \infty} \langle a_j^\dagger a_i \rangle = |\langle a_i \rangle|^2. \qquad (8.5)$$
In other words, the ODLRO associated with the presence of superfluidity, or a finite condensate fraction, can be computed by analysing the expectation value of $a_l$,
$$\langle a_l \rangle = \sqrt{\rho_0}\, e^{i\varphi}, \qquad (8.6)$$
where the phase $\varphi$ is constant over the whole system but is otherwise arbitrary. The condensate fraction $\rho_0$ shows that the model is in an insulating phase for $|\mu| \geq 4$ ($\rho = 0, 1$) and in a superfluid phase for $-4 < \mu < 4$ ($0 < \rho < 1$), with a continuous quantum phase transition occurring at $|\mu| = 4$, as expected. However, this time the curves $\rho_0(\rho)$ obtained with $D = 2$ and $D = 3$ are noticeably different, with the $D = 3$ results again in remarkable agreement with the SSE and SW results of Ref. [BBM+02].

8.3.2 Entanglement

The iPEPS algorithm is based on assuming that a PEPS/TPS offers a good description of the state $|\Psi\rangle$ of the system. Results for small $D$ will only be reliable if $|\Psi\rangle$ has at most a moderate amount of entanglement. Thus, in order to understand in which regime the iPEPS algorithm should be expected to provide reliable results, it is worth studying how entangled the ground state $|\Psi_{GS}\rangle$ is as a function of $\mu$.
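The two diagnostics used below can be sketched in a few lines, assuming the standard definitions of the purity, $r = \mathrm{Tr}\,\varrho^2$, and the entanglement entropy, $S = -\mathrm{Tr}\,\varrho \log_2 \varrho$ (the base-2 logarithm and the example density matrices are our own illustrative assumptions, not thesis conventions or data):

```python
import numpy as np

# Illustrative sketch of single-site entanglement diagnostics:
# purity r = Tr(rho^2) and entropy S = -Tr(rho log2 rho).

def purity(rho):
    return float(np.trace(rho @ rho).real)

def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

rho_product = np.diag([1.0, 0.0])   # site unentangled with the rest: r = 1, S = 0
rho_mixed = np.diag([0.5, 0.5])     # maximally mixed single site: r = 1/2, S = 1

print(purity(rho_product), entropy(rho_product))
print(purity(rho_mixed), entropy(rho_mixed))
```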
Figure 8.1: Particle density $\rho(\mu)$, energy per lattice site $\epsilon(\rho)$ and condensate fraction $\rho_0(\rho)$ for a PEPS/TPS with $D = 2, 3$. We have also plotted results from Ref. [BBM+02] corresponding to several other techniques (SSE, MF, SW and the ladder-diagram calculation). Our results follow closely those obtained with stochastic series expansion (SSE) and mean field with spin-wave corrections (SW).

Figure 8.2: Purity $r$ and entanglement entropy $S_L$ ($L = 1, 2, 4$) as a function of the chemical potential $\mu$, for $D = 2, 3$. The results indicate that the ground state is more entangled deep inside the superfluid phase ($\mu = 0$) than at the phase transition point ($\mu = -4$). Notice that the more entangled the ground state is, the larger the differences between results obtained with $D = 2$ and $D = 3$ (see also Fig. 8.1).

To do this, we compute the purity for a single site, as defined in Chapter 3. Recall the convention: if the site is unentangled with the rest of the system, then its purity, $r$, is 1. As the entanglement between a site and the rest of the system increases, the purity decreases. Fig. 8.2 shows the purity $r$ as a function of the chemical potential. In the insulating phase ($\mu \leq -4$), the ground state of the system consists of a vacancy on each site. In other words, it is a product state, $r = 1$. For $\mu > -4$, instead, the ground state is entangled. Several comments are in order:

(i) The purity $r(\mu)$ for $D = 3$ is smaller than that for $D = 2$ by up to 3%. This is compatible with the fact that a PEPS/TPS with larger bond dimension $D$ can carry more entanglement.

(ii) Results for $D = 2, 3$ seem to indicate that the ground state is more entangled ($r$ is smaller) deep in the superfluid phase (e.g.
$\mu = 0$) than at the continuous quantum phase transition $\mu = -4$. This is in sharp contrast with the results obtained e.g. for the 2D quantum Ising model [JOV+08], where the quantum phase transition displays the most entangled ground state. However, notice that in the Ising model the system is only critical at the phase transition, whereas in the present case criticality extends throughout the superfluid phase. Each value of $\mu$ in the superfluid phase corresponds to a fixed point of the RG flow. That is, in moving away from the phase transition we are not following an RG flow. Therefore, the notion that entanglement should decrease along an RG flow [LLRV05], as observed in the 2D Ising model, is not applicable for the HCBH model.

(iii) Accordingly, we expect that the iPEPS results for small $D$ become less accurate as we go deeper into the superfluid phase (that is, as we approach $\rho = 0.5$). This is precisely what we observe: the curves $\rho_0(\rho)$ for $D = 2$ and $D = 3$ in Fig. 8.1 differ most at $\rho = 0.5$.

We also compute the entanglement entropy for the reduced density matrix $\varrho_L$ ($L = 1, 2, 4$) corresponding to one site, two contiguous sites and a block of 2 × 2 sites, respectively. The entanglement entropy vanishes for an unentangled state and is non-zero for an entangled state. The curves $S(\varrho_L)$ confirm that the ground state of the HCBH model is more entangled deep in the superfluid phase than at the quantum phase transition point.

8.3.3 Correlations

We compute the two-point correlation function $C(s)$,
$$C(s) \equiv \langle a_i^\dagger a_{i+s\hat x} \rangle - \langle a_i^\dagger \rangle \langle a_{i+s\hat x} \rangle, \qquad (8.7)$$
for pairs of sites separated by $s$ lattice spacings along the horizontal direction $\hat x$. For points in the gapless superfluid phase, one expects to see correlation functions that decay polynomially with $s$. Fig. 8.3 shows $C(s)$ for PEPS representations of a superfluid ground state, $\mu = 0$.
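The distinction between the two decay forms can be made quantitative: exponential decay is a straight line of $\log C$ against $s$, polynomial decay a straight line of $\log C$ against $\log s$, so comparing the residuals of the two linear fits classifies a curve. A hypothetical sketch with synthetic data (not the PEPS correlators):

```python
import numpy as np

# Classify a decaying correlator by comparing straight-line fit residuals
# on log-linear axes (exponential decay) versus log-log axes (polynomial decay).

def decay_type(s, C):
    s, C = np.asarray(s, float), np.asarray(C, float)

    def resid(x, y):
        """Sum of squared residuals of a degree-1 least-squares fit."""
        coeffs = np.polyfit(x, y, 1)
        return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

    r_exp = resid(s, np.log(C))           # linear here  -> exponential decay
    r_poly = resid(np.log(s), np.log(C))  # linear here  -> polynomial decay
    return "exponential" if r_exp < r_poly else "polynomial"

s = np.arange(1, 13)
print(decay_type(s, np.exp(-0.8 * s)))   # exponential
print(decay_type(s, s ** -1.5))          # polynomial
```

On finite-$D$ PEPS data the verdict can change with the fit window, since a finite bond dimension eventually forces exponential decay at large $s$.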
Just as for the Ising model, we obtain correlation functions that decay exponentially in $s$, but as $D$ increases we see a tendency towards a polynomial decay, and agreement for small $s$. The results show that while for short distances $s = 0, 1, 2$ the correlator $C(s)$ is already well converged with respect to $D$, for larger distances $s$ the correlator still depends significantly on $D$. This seems to indicate that while the iPEPS algorithm provides remarkably good results for local observables already for affordably small values of $D$, a larger $D$ might be required in order to also obtain accurate estimates for distant correlators.

Figure 8.3: Two-point correlation function $C(s)$ versus distance $s$ (measured in lattice sites), along a horizontal direction of the lattice, for $D = 2, 3, 4$. For very short distances the correlators for $D = 2, 3, 4$ are very similar, whereas for larger distances they differ significantly.

8.3.4 Fidelity

Given two ground states $|\Psi_{GS}(\mu_1)\rangle$ and $|\Psi_{GS}(\mu_2)\rangle$, corresponding to different values of the chemical potential $\mu$, the fidelity per site $f$ [ZB07], defined through
$$\ln f(\mu_1, \mu_2) = \lim_{N \to \infty} \frac{1}{N} \ln \left| \langle \Psi_{GS}(\mu_1) | \Psi_{GS}(\mu_2) \rangle \right|,$$
can be used as a means to distinguish between qualitatively different ground states [ZPac06, ZB07]. In the above expression, $N$ is the number of lattice sites and the thermodynamic limit $N \to \infty$ is taken. Importantly, the fidelity per site $f(\mu_1, \mu_2)$ remains finite in this limit, even though the overall fidelity $|\langle \Psi_{GS}(\mu_1) | \Psi_{GS}(\mu_2) \rangle|$ vanishes. In a sense, $f(\mu_1, \mu_2)$ captures how quickly the overall fidelity vanishes. As explained in Chapter 5, the fidelity per site $f(\mu_1, \mu_2)$ can be easily computed within the

Figure 8.4: Fidelity per lattice site $f(\mu_1, \mu_2)$ for the ground states of the HCBH model.
Notice the plateau $f(\mu_1, \mu_2) = 1$ (white) for $\mu_1, \mu_2 \leq -4$ (also for $\mu_1, \mu_2 \geq 4$), corresponding to the Mott insulating phase, and the pinch point at $\mu_1 = \mu_2 = -4$ (also at $\mu_1 = \mu_2 = 4$), consistent with a continuous quantum phase transition.

framework of the iPEPS algorithm [ZOV08]. In the present case, before computing the overlap each ground state is rotated according to $e^{i\varphi\sigma_z/2}$, where $\varphi$ is the random condensate phase of Eq. 8.6. In this way all the ground states have the same phase $\varphi = 0$.

The fidelity per site $f(\mu_1, \mu_2)$ is presented in Fig. 8.4. The plateau-like behavior of $f(\mu_1, \mu_2)$ for points within the separable Mott-insulator phase ($\mu_1, \mu_2 \leq -4$ or $\mu_1, \mu_2 \geq 4$) is markedly different from that between ground states in the superfluid region ($-4 \leq \mu_1, \mu_2 \leq 4$), where the properties of the system vary continuously. Moreover, similarly to what has been observed for the 2D quantum Ising model [ZOV08] or the 2D quantum XYX model [LLZ09], the presence of a continuous quantum phase transition between the insulating and superfluid phases of the 2D HCBH model is signaled by pinch points of $f(\mu_1, \mu_2)$ at $\mu_1 = \mu_2 = \pm 4$. That is, the qualitative change in ground-state properties across the critical point is evidenced by a rapid, continuous change in the fidelity per lattice site as one considers two ground states on opposite sides of the critical point and moves away from it.

Figure 8.5: Evolution of the energies $\langle H_{\rm HC} \rangle$ and $\langle H \rangle$, the density $\rho$, and the condensate fraction $\rho_0$ after a translation-invariant perturbation $V$ is suddenly added to the Hamiltonian.

8.3.5 Time evolution

Using a slight modification of the algorithm for imaginary-time evolution, we can simulate the Hamiltonian evolution of PEPS states. A first example of such simulations with the iPEPS algorithm was provided in Ref.
[ODV09], where an adiabatic evolution across the phase transition of the 2D quantum compass orbital model was simulated in order to show that the transition is of first order. The main difficulty in simulating a (real-)time evolution is that, even when the initial state $|\Psi(0)\rangle$ is not very entangled and can therefore be properly represented with a PEPS/TPS with small bond dimension $D$, the entanglement in the evolved state $|\Psi(t)\rangle$ will typically grow with time $t$, and a small $D$ will quickly become insufficient. Increasing $D$ results in a huge increase in computational cost, which means that only those rare evolutions in which not much entanglement is created can be simulated in practice.

For demonstrative purposes, here we have simulated the response of the ground state $|\Psi_{GS}\rangle$ of the HCBH model at half filling ($\rho = 0.5$ or $\mu = 0$) when the Hamiltonian $H_{\rm HC}$ is suddenly replaced with a new Hamiltonian $H$ given by
$$H \equiv H_{\rm HC} + \gamma V, \qquad V \equiv -i \sum_k \left( a_k - a_k^\dagger \right), \qquad (8.8)$$
where $\gamma = 0.2$ and, importantly, the perturbation $V$ respects translation invariance. As the starting point of the simulation, we consider a PEPS/TPS representation of the ground state with bond dimension $D = 2$, obtained as before through imaginary-time evolution. Fig. 8.5 shows the evolution in time of the expectation value per site of the energies $\langle H_{\rm HC} \rangle$ and $\langle H \rangle$, as well as the density $\rho$ and condensate fraction $\rho_0$. Notice that the expectation value of $H$ should remain constant throughout the evolution. The fluctuations observed in $\langle H \rangle$, of the order of 0.2% of its total value, are likely to be due to the small bond dimension $D = 2$ and indicate the scale of the error in the evolution. The simulation shows that, as a result of having introduced a perturbation $V$ that does not preserve particle number, the particle density $\rho$ oscillates in time. The condensate fraction, as measured by $|\langle a_l \rangle|^2$, is seen to oscillate twice as fast.
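The conservation of $\langle H \rangle$ used above as an error gauge is exact for the true dynamics. The following exact-diagonalization sketch reproduces the quench of eq. 8.8 on a single 2 × 2 plaquette (an illustration of the setup only, not the iPEPS simulation; the plaquette geometry and helper names are our own assumptions) and checks that $\langle H \rangle$ stays constant while $\rho(t)$ is free to vary, since $V$ breaks particle-number conservation:

```python
import numpy as np
from functools import reduce

# Sudden quench H_HC -> H = H_HC + gamma*V on a 2x2 plaquette, evolved exactly.

I2 = np.eye(2)
a = np.array([[0., 1.], [0., 0.]])
n = a.T @ a
NS = 4
BONDS = ((0, 1), (2, 3), (0, 2), (1, 3))

def site_op(op, site):
    ops = [I2] * NS
    ops[site] = op
    return reduce(np.kron, ops)

# H_HC at mu = 0, J = 1
H0 = np.zeros((2**NS, 2**NS), dtype=complex)
for i, j in BONDS:
    H0 -= site_op(a.T, i) @ site_op(a, j) + site_op(a.T, j) @ site_op(a, i)

V = sum(-1j * (site_op(a, k) - site_op(a.T, k)) for k in range(NS))
H = H0 + 0.2 * V   # gamma = 0.2, as in eq. 8.8

# ground state of H0, then exact evolution exp(-iHt) via eigendecomposition
w0, v0 = np.linalg.eigh(H0)
psi0 = v0[:, 0]
w, v = np.linalg.eigh(H)

def evolve(t):
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

Nop = sum(site_op(n, k) for k in range(NS))
times = (0.0, 1.0, 2.0)
e = [float((evolve(t).conj() @ H @ evolve(t)).real) for t in times]
rho = [float((evolve(t).conj() @ Nop @ evolve(t)).real) / NS for t in times]

print(round(max(e) - min(e), 9))        # 0.0: <H> is exactly conserved
print([round(r, 3) for r in rho])       # density is time dependent in general
```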
8.4 Conclusion

In this chapter we have initiated the study of interacting bosons on an infinite 2D lattice using the iPEPS algorithm. We have computed the ground state of the HCBH model on the square lattice as a function of the chemical potential. Then we have studied a number of properties, including properties that can be easily accessed with other techniques [BBM+02], as is the case for the expectation value of local observables, as well as properties whose computation is harder, or even not possible, with previous techniques.

Specifically, using a small bond dimension $D = 2, 3$, we have been able to accurately reproduce the results of previous computations using SSE and SW of Ref. [BBM+02] for the expectation value of the particle density $\rho$, energy per site $\epsilon$ and condensate fraction $\rho_0$, throughout the whole phase diagram of the model, which includes both a Mott insulating phase and a superfluid phase, as well as a continuous phase transition between them. Interestingly, in the superfluid phase the PEPS/TPS representation spontaneously breaks particle number conservation, and the condensate fraction can be computed from the expectation value of the annihilation operator, $\rho_0 = |\langle a_l \rangle|^2$.

We have also conducted an analysis of entanglement, which revealed that the most entangled ground state corresponds to half filling, $\rho = 0.5$. This is deep in the superfluid phase and not near the phase transition, as is the case for the 2D quantum Ising model [JOV+08]. Furthermore, inspection of a two-point correlator at half filling showed much faster convergence in the bond dimension $D$ for short distances than for large distances. Also, pinch points in the plot of the fidelity $f(\mu_1, \mu_2)$ were consistent with continuous quantum phase transitions at $\mu = \pm 4$. Finally, we have also simulated the evolution of the system, initially in the ground state of the HCBH model at half filling, when a translation-invariant perturbation is suddenly added to the Hamiltonian.
Now that the validity of the iPEPS algorithm for the HCBH model (equivalently, the quantum XX spin model) has been established, there are many directions in which the present work can be extended. For instance, one can easily include nearest-neighbour repulsion, $V_2 \neq 0$ (corresponding to the quantum XXZ spin model), and/or investigate a softer-core version of the Bose-Hubbard model by allowing up to two or three particles per site.

Chapter 9
The Quantum Potts Model

9.1 Introduction

The classical Potts model was introduced as a generalisation of the classical Ising model to an arbitrary number of local spin components [Pot52]. Instead of being confined to merely two directions, the spin at site $i$ is free to exist in one of $q$ directions $\theta_{n_i} = \frac{2\pi n_i}{q}$, where $n_i = 0, 1, 2, ..., q-1$. The general form of the classical Potts Hamiltonian is
$$H = \sum_{\langle ij \rangle} J_{ij}, \qquad (9.1)$$
where $J_{ij}$ describes some nearest-neighbour coupling between spins. Potts considered two forms for $J_{ij}$. In the first, he chose
$$J_{ij} = -\cos\left( \theta_{n_i} - \theta_{n_j} \right). \qquad (9.2)$$
This so-called planar Potts model or clock model is invariant under the action of the global symmetry group $Z_q$, corresponding to a cyclic rotation of each spin by $\frac{2\pi k}{q}$, where the constant $k = 0, 1, ..., q-1$. It is obvious that for $q = 2$ the Ising Hamiltonian is recovered. Potts determined the critical temperature of the model on the square lattice for $q = 3$ and $q = 4$ but was unable to progress any further. Domb suggested a simplification to the model, where the coupling takes a second form:
$$J_{ij} = -\delta(n_i, n_j). \qquad (9.3)$$
For this model, Potts was able to determine critical temperatures for all values of $q$, and it became known as the standard Potts model. The four-component instance of this model had previously been studied by Ashkin and Teller [AT43], and for this reason the generalization is sometimes called the Ashkin-Teller-Potts model.

The Potts model is of interest for several reasons.
Firstly, it exhibits the same general behaviour as the Ising model for all $q$. In two or more dimensions, the system exists in a ferromagnetic state at low temperatures, where the symmetry of the Hamiltonian is broken. As the temperature increases, the system undergoes a phase transition to a disordered paramagnetic state with the symmetry restored. Secondly, many properties of the classical system are known exactly in two dimensions ($D = 1$) [Bax73], meaning that the rich character of the phase transition can be explored and compared for different $q$, providing broad insight into the theory of phase transitions. Thirdly, the system has been seen to have wider application in areas such as lattice statistics and percolation theory [KF69, Wu78, KW78, Wu82], studies of dilute spin glasses [Wu82], and the investigation of self-dual lattice gauge theories [KPSS80] and the structure of QCD [SY82, BB07].

A particularly interesting aspect of the Potts model is the way in which the nature of the phase transition changes with $q$. In the classical Potts model on the square lattice, the phase transition is continuous for $q \leq 4$ and first-order for $q > 4$. Whilst there exists no analytical solution in three spatial dimensions ($D = 2$), the system has been explored with Monte Carlo [BBD08, ABV91] and series expansion [PYJM06] techniques, and the phase transition is thought to be continuous for $q = 2$, weakly first-order for $q = 3$ and first-order for $q \geq 4$. Treating $q$ as a continuous parameter leads to the concept of a 'critical $q$', $q_c$, where the change from a first-order to a continuous phase transition occurs [Wu82]. For example, for the 2D classical Potts model it is known that $q_c = 4$. The exact value of $q_c$ is not known for the 3D classical Potts model, but it has been estimated as $q_c \approx 2.6 \pm 0.1$ [KS81].

In this chapter, we apply the iPEPS algorithm to the quantum Potts model in two spatial dimensions ($D = 2$).
The universal behaviour of this model is the same as that of the 3D classical model, and hence we expect the same characterisation of the phase transition with $q$. Our interest is as much in profiling the physical characteristics of the quantum Potts model as it is in benchmarking the iPEPS algorithm on systems with continuous and first-order phase transitions.

In Chapter 7, we studied the quantum Ising model (the $q = 2$ quantum Potts model). There, we observed that the presence of quasi-long-range correlations near the phase transition had profound algorithmic implications. In particular, we observed the following (see fig. 7.3):

1. The most dramatic improvement in the estimation of the order parameter with increasing $D$ occurred near the phase transition.

2. Use of the full environment in the PEPS update improved the estimation of the order parameter when compared with the simplified update, again most dramatically near the phase transition.

3. Mean-field theory produced results that agreed poorly with PEPS, except far from the phase transition.

These observations only confirmed what we already knew about continuous quantum phase transitions. As the system approaches criticality, the amount of quasi-long-range correlations in the ground state increases. This meant that a purely local approximation (MFT) was unable to describe the physics of the ground state. Furthermore, an imaginary-time evolution of an iPEPS guided by mostly local information (the simplified update) performed significantly worse than an evolution guided by the full information of the state. In this chapter, we aim to assess the relative performance of the full update, the simplified update and mean-field theory for the quantum Potts model.

9.1.1 First-order phase transitions

The identifying characteristics of a first-order phase transition include:

1. A sharp phase transition corresponding to an energy eigenvalue crossing.
A transition between phases well separated in phase space, and with substantial uniformity within each phase.

2. A ground-state energy that is discontinuous in its first derivative with respect to the Hamiltonian parameter at the phase transition.

3. A non-diverging correlation length, $\xi$.

These have considerable implications for how we approach the system with PEPS, and for what we might expect to observe in the simulation results. The first point poses well-known problems for the numerical investigation of first-order phase transitions. The two phases of the system at zero temperature are not connected at any order of perturbation theory, nor are they connected in the normal sense of a Metropolis walk in a quantum Monte Carlo simulation. For such techniques, the location of the eigenvalue crossing is often either poorly estimated or not detected at all. As such, our treatment of such a system with tensor networks demands special consideration. Traversing the phase diagram by, say, increasing $\lambda$ and reusing evolved PEPS ground states as subsequent starting points means that we will likely stay in a given phase and be unable to detect the phase transition. Even using a random PEPS as an initial point will naturally favour one phase over the other close to the phase transition. For this reason, we approach the phase transition from above and below: first using an initial state we know to be in the disordered phase and computing the ground states for decreasing magnetic field, then using an initial state we know to be in the ordered phase and computing the ground states for increasing magnetic field. Plotting the ground-state energy obtained from each of the two approaches should give rise to the discontinuity in the first derivative of the energy stated in the second point. The third point suggests that quasi-long-range correlations will not be as significant in describing the ground state.
So even though the local physical dimension $q$ is greater than in the spin-1/2 Ising model (and hence the computational cost for a given bond dimension $D$ is higher), the correlation length of the ground state is finite and the system may be comparatively better characterised for small $D$. Furthermore, we may suggest that this model will be relatively well described by the simplified update outlined in section 5.3.1, where locally correlated states are favoured in the imaginary-time evolution. Finally, we expect the weakly first-order $q = 3$ phase transition to be less emphatic in its first-order characteristics than the $q = 4$ model, and to show some tendencies towards a second-order phase transition.

In the following sections, we first define the quantum Potts model and then aim to determine the phase diagram for $q = 3$ and $q = 4$. For both values of $q$ we will compute the energy of the system at varying magnetic field and its first derivative, to i) locate the phase transition and ii) observe in each case that it is first-order in nature. Thereafter, we will compute relevant local observables, including the order parameter. Finally, we will examine the fidelity diagram, correlation function and entanglement properties of the system.

9.2 The Quantum Potts Model

The quantum Potts model typically refers to a quantum-mechanical version of the standard Potts model. As a generalisation of the quantum Ising model, the quantum Potts Hamiltonian can be written in the form
$$H_{\rm Potts} = -\frac{1}{q} \sum_{\langle i,j \rangle} \sum_{k=1}^{q-1} \sigma_{x,k}^i \sigma_{x,q-k}^j - \lambda_z \sum_i \sigma_z^i, \qquad (9.4)$$
where $\lambda_z$ is again an external magnetic field. In line with the discussion in [SP81], we define the following operators,
$$\Omega = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & \omega & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \omega^{q-1} \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, \qquad (9.5)$$
where $\omega = e^{2\pi\iota/q}$ and $\iota = \sqrt{-1}$. For a representation diagonal in the coupling, we make the replacements
$$\sigma_{x,k} = \Omega^k, \qquad \sigma_z = \sum_{k=1}^{q-1} M^k. \qquad (9.6)$$
Alternatively, we can transform to a basis where the external-field operator is diagonal, in which case
$$\sigma_{x,k} = M^k, \qquad \sigma_z = R = \begin{pmatrix} q-1 & 0 & \cdots & 0 \\ 0 & -1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & -1 \end{pmatrix}. \qquad (9.7)$$
The Hamiltonian is $Z_q$ invariant under a cyclic rotation of the basis. In the representation given by eqn. 9.6, this is proven by the commutator
$$\left[ H_{\rm Potts}, M^k \right] = 0, \qquad k = 0, 1, ..., q-1. \qquad (9.8)$$

                                                      λ_{z,pt}
  D = 3 simplified                                    0.8732
  D = 4 simplified                                    0.8732
  D = 5 simplified                                    0.8718
  D = 6 simplified                                    0.8716
  D = 3 CTMRG                                         0.8749
  lowest energy (D = 3 CTMRG and D = 6 simplified)    0.8722

Table 9.1: The location of the phase transition for various versions of the algorithm and different $D$. The 'lowest energy' value is taken from the lowest-energy ground states on either side of the phase transition.

9.3 q = 3 Results

In fig. 9.1 we plot the energy for the $q = 3$ Potts model. We show results for a mean-field-theory calculation, as well as for an iPEPS with $D = 3$ and $D = 6$ using the simplified update and with $D = 3$ using the full environment. For each $D$, we record the energy crossing point in table 9.1. Comparing the first derivative of the energy curves as we approach from above and below, we can quantify to what extent the phase transition is first-order. In fig. 9.2 we plot the derivative as calculated by a finite-difference method. The black dotted vertical line shows the position of the phase transition as determined from the energy crossing, and it is clear that for all values of $D$ the first derivative is discontinuous at the phase transition. For the $D = 6$ results, the magnitude of the discontinuity at the phase transition is $\approx 0.207$.

We define an order parameter for a $q$-level Potts system as
$$\Theta = \sqrt{ \frac{1}{q-1} \sum_{k=1}^{q-1} \langle \sigma_{x,k}^i \rangle \langle \sigma_{x,q-k}^j \rangle }, \qquad (9.9)$$
where $i$ and $j$ are adjacent sites. This quantity is non-zero when the $Z_q$ symmetry of the Hamiltonian is broken.
For the q = 3 Potts model, we have plotted the order parameter and the magnetization in the direction of the magnetic field in figs. 9.3 and 9.4 respectively. At the phase transition we see a discontinuous jump in both quantities. Moreover, we see good convergence with D in the simplified PEPS results, and good agreement with those generated with the full environment. Examining the insets, it can be seen that it is only in the neighbourhood of the phase transition that the full environment yields noticeably different local observables. This contrasts greatly with the results for the quantum Ising model, where there was little convergence amongst the results obtained with the simplified update, and it was only with the full environment that our results agreed well with other numerical results. Furthermore, the Potts model mean-field theory estimate of the order parameter, which involves no entanglement, is comparatively much closer to the best PEPS estimate. This suggests there is far less entanglement in the ground state of the q = 3 quantum Potts model than in that of the quantum Ising model.

Figure 9.1: (above) The energy of the Potts model, showing mean-field results and results for a PEPS using the simplified update with D = 3. At this scale, the energies for D = 4, 5 and 6 ground states computed with the simplified update, or for D = 3 ground states computed with the full environment (CTMRG), are indistinguishable. (below) A magnified picture, with simplified update and CTMRG results, and energies relative to the D = 3 CTMRG results. These results clearly depict a first-order transition as we approach the transition from above (dotted) and below (dashed). The point where the two curves cross marks the phase transition. We have labelled such points for several values of D, and also the point corresponding to the crossing of the lowest energy solutions for magnetic fields above (simplified PEPS, D = 6) and below (CTMRG, D = 3) the phase transition.

Figure 9.2: Plot showing the first derivative of the energy per lattice site with respect to the external field, λZ, as determined by a finite difference method. The vertical dotted line marks the phase transition. The derivative is clearly discontinuous at this point. The dashed plots show the trajectory of the derivatives after the crossing.

Figure 9.3: Plot showing the order parameter of the q = 3 Potts model as a function of the external field, λZ.

Fidelity Diagram

The fidelity diagram for the q = 3 Potts model is shown in fig. 9.5. In comparison with the corresponding diagram for the Ising model (fig. 7.5), the fidelity diagram of the q = 3 Potts model exhibits a sharper drop-off at the phase transition. This clearly represents a first-order transition between two phases with quite different macroscopic properties.
By contrast, in the quantum Ising model, we observed a smooth roll-off around the phase transition and a characteristic pinching at the transition point.

Figure 9.4: Plot showing the magnetization of the q = 3 Potts model in the direction of the magnetic field, as a function of λZ.

Correlation Functions

For the q = 3 Potts model, we consider the following spatial correlation function given in [FFGP07],

\[ C(\vec{l}_x) = \frac{1}{q-1} \sum_{k=1}^{q-1} \Big\langle \sigma^{i+l_x}_{x,k} \big(\sigma^{i}_{x,k}\big)^{\dagger} \Big\rangle, \tag{9.10} \]

where \( \vec{l}_x \) represents some displacement from the site i in the horizontal direction. In figure 9.6 we plot the correlation function for the D = 3 ground states computed with the full environment, along a row of the lattice for separations of up to 20 sites. The correlation length along a row can be defined as in [Bax82],

\[ \xi = \frac{1}{\ln\!\big( \Lambda_1 / \Lambda_2 \big)}, \tag{9.11} \]

where Λ1 and Λ2 are the eigenvalues of the column transfer matrix of largest and second-largest magnitude respectively. It should be noted that if our system possessed a continuous phase transition, the transfer matrix would be degenerate in its first and second eigenvalues at the critical point. This means that the correlation length is infinite at the phase transition in, for example, the quantum Ising model.

Figure 9.5: Fidelity diagram for the q = 3 Potts model, computed from D = 3 ground states evolved with the simplified update.

In the inset of figure 9.6, we plot the correlation length against the magnetic field for D = 3 ground states computed with the full environment and χ = 10 (blue), 20 (red) and 30 (green).
One can see that the correlation length appears finite and well converged in χ at the phase transition, where ξ ≈ 0.56. However, since we have not computed ground states for higher D with the full environment, we cannot say our results are converged in D. As such, our results for the correlation function and correlation length can only be considered a first approximation.

Figure 9.6: Two point correlation function of the q = 3 Potts model for various values of the external field, λZ. These have been computed from D = 3 ground states computed with the full environment and χ = 30. (inset) The correlation length of the q = 3 Potts model, as defined in equation 9.11, against λZ for D = 3 ground states computed with the full environment and χ = 10, 20 and 30.

Entanglement Entropy

The entanglement entropy for a pair of neighbouring sites is shown as a function of the external field, λZ, in fig. 9.7. Here, the solid lines trace the entanglement entropy of the lowest energy eigenstate of the Hamiltonian, the dotted lines indicate the trajectory of the entropies for the two sectors after the spectral crossing, and the phase transition is marked by the vertical dotted black line. It can be seen that there is a sharp jump in the entanglement entropy between the two lowest eigenstates at the phase transition. Furthermore, the transition point does not correspond to a crossing of the entanglement entropies of the two eigenstates, nor does it represent a point at which these entropies peak. In the inset, we see a slightly increased entropy for the CTMRG solution. As we move away from the phase transition, the CTMRG ground state entropy converges with the simplified update entropy.
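The two-site entropy discussed above is obtained from the reduced density operator of a pair of neighbouring sites. A minimal sketch (using small arbitrary example states, not PEPS results) is:

```python
import numpy as np

def two_site_entropy(psi, d, n):
    """Von Neumann entropy S = -Tr(rho ln rho) of the first two sites of an
    n-site pure state psi with local dimension d."""
    m = psi.reshape(d * d, d ** (n - 2))   # split: first pair | rest
    rho = m @ m.conj().T                   # two-site reduced density operator
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                       # drop numerical zeros
    return float(-(p * np.log(p)).sum())

d, n = 2, 4
prod = np.zeros(d ** n); prod[0] = 1.0    # product state |0000>: pair is pure
sing = np.zeros(d ** n)                   # |0> (|01> - |10>)/sqrt(2) |0>:
sing[0b0010] = 1 / np.sqrt(2)             # a singlet straddles the cut between
sing[0b0100] = -1 / np.sqrt(2)            # the pair and the rest of the chain

print(np.isclose(two_site_entropy(prod, d, n), 0.0))        # True
print(np.isclose(two_site_entropy(sing, d, n), np.log(2)))  # True
```

A product state gives zero entropy, while a singlet crossing the boundary of the pair gives the maximal single-bond value ln 2.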
9.4 q = 4 Results

We now perform the same simulations for the q = 4 Potts model. We again use the simplified update to generate ground states corresponding to PEPS with D = 4, 5 and 6. Additionally, we perform a simulation guided by the full environment (CTMRG) for D = 4.

Figure 9.7: Entanglement entropy of the q = 3 Potts model. The dashed lines plot the trajectory of the entropy after the phase transition.

The energy plot is shown in fig. 9.8. It is quite evident here that there exists a first-order transition, with a discontinuity in the first derivative of the energy at λZ,pt ≈ 0.61. In the inset of fig. 9.8, one can see once again that the first derivatives of the dotted and solid energy lines differ at the crossing. Moreover, in contrast to the q = 3 transition in fig. 9.1, we see that the results for different D are almost indistinguishable. The system appears well converged for a simplified update with D = 4.

The first derivative of the energy is plotted in fig. 9.9. At the phase transition, marked by the vertical black dotted line, the difference in the magnitude of the derivative immediately above and below the transition is ≈ 0.694. This exceeds the corresponding result for the q = 3 Potts model, supporting the accepted notion that the phase transition in the q = 3 Potts model is weakly first-order in comparison.

The order parameter and magnetization in the direction of the external field are shown in figs. 9.10 and 9.11 respectively. Once again we observe that the mean-field solution is very close to the PEPS results.
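The mean-field curves that the PEPS results are compared against come from an optimised product (unentangled) state. The sketch below is a hedged illustration for q = 3 only, assuming (without proof) that the optimal product state can be taken real with all non-selected components equal, so that a one-parameter ansatz phi = (cos t, sin t/√2, sin t/√2) suffices; the dense-grid minimisation is a simplification, not the procedure used in the thesis.

```python
import numpy as np

def mf_energy(t, lam):
    """Mean-field energy per site of the q = 3 Potts model of eqn. 9.4 on the
    square lattice (two bonds per site), for the assumed one-parameter real
    ansatz phi = (cos t, sin t / sqrt(2), sin t / sqrt(2))."""
    a, b = np.cos(t), np.sin(t) / np.sqrt(2.0)
    coupling = (a * a - b * b) ** 2       # |<Omega^k>|^2, identical for k = 1, 2
    field = 2.0 * (2.0 * a * b + b * b)   # <sigma_z> = <M + M^2>
    return -(2.0 / 3.0) * 2.0 * coupling - lam * field

def mf_ground_energy(lam, n=20001):
    """Minimise the mean-field energy over a dense grid of the ansatz angle."""
    ts = np.linspace(0.0, np.pi / 2.0, n)
    return float(mf_energy(ts, lam).min())

# At lam = 0 the fully ordered product state is optimal: e = -2(q-1)/q = -4/3.
print(round(mf_ground_energy(0.0), 6))   # -1.333333
```

Because the ansatz carries no entanglement, the gap between this estimate and the PEPS energy is a rough proxy for how entangled the true ground state is, which is the comparison made in the text above.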
Figure 9.8: (above) The energy of the q = 4 Potts model, showing mean-field results and results for a simplified PEPS with D = 4, 5 and 6. One can see that, even magnified, the results for different D are almost indistinguishable. The results here also seem to agree well with the mean-field theory results.

Figure 9.9: The first derivative of the energy per lattice site of the q = 4 Potts model with respect to the magnetic field, λZ, as calculated by a finite difference method.

Fidelity Diagram

The fidelity diagram for the q = 4 Potts model is shown in fig. 9.12. In comparison to the equivalent figure for the q = 3 model, one can see that the drop-off around the phase transition is much more severe, and that the surface for λ1Z, λ2Z < λZ,pt is much flatter. This is in line with a phase transition that is strongly first-order.

Correlation Functions

In figure 9.13, we plot the correlation function for the q = 4 Potts model. Inspection shows the q = 4 correlator decaying slightly faster than for q = 3. In particular, it can be seen that at the phase transition, the q = 4 correlation function appears to decay faster than the q = 3 correlation function. The inset shows good convergence of the correlation length (as defined in eqn. 9.11) with χ, and a final estimate of ξ ≈ 0.51, a value smaller than that found for q = 3. However, once again we cannot say that our results are converged in D, and as such the accuracy of this result is quite uncertain.
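The correlation-length estimate ξ = 1/ln(Λ1/Λ2) of eqn. 9.11 only requires the two largest-magnitude eigenvalues of a transfer matrix. As a self-contained check, the sketch below applies it to the 2x2 transfer matrix of the 1D classical Ising chain, where a closed-form answer is known, rather than to a PEPS column transfer matrix.

```python
import numpy as np

def xi_from_transfer(T):
    """Correlation length from the two largest-magnitude eigenvalues (eqn. 9.11)."""
    lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return 1.0 / np.log(lam[0] / lam[1])

beta = 0.7
# 1D classical Ising transfer matrix: eigenvalues 2*cosh(beta), 2*sinh(beta).
T = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))
xi = xi_from_transfer(T)
exact = 1.0 / np.log(1.0 / np.tanh(beta))   # known closed form, xi = 1/ln(coth beta)
print(np.isclose(xi, exact))  # True
```

As the text notes, a degeneracy Λ1 = Λ2 would send ξ to infinity, which is why a finite, χ-converged ξ at the transition is a signature of first-order behaviour.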
Entanglement Analysis

The entanglement entropy for a pair of neighbouring sites, S2, is plotted against the external field, λZ, in fig. 9.14. The magnitude of S2 at the phase transition is comparable to the q = 3 result in fig. 9.7. However, compared to the q = 3 plot, the entropy varies less with increasing D (see inset). This once again suggests that the D = 4 result for the q = 4 quantum Potts model is very close to the ground state.

Figure 9.10: A plot of the order parameter of the q = 4 Potts model.

Figure 9.11: A plot of the magnetization in the direction of the magnetic field for the q = 4 Potts model.

Figure 9.12: The fidelity diagram for the q = 4 Potts model, using the simplified update D = 4 ground states.

Figure 9.13: Two point correlation function of the q = 4 Potts model for various values of the external field, λZ, for ground states computed with the full environment and D = 4. (inset) The correlation length of the q = 4 Potts model, as defined in equation 9.11, against λZ for χ = 8, 16 and 24.
Figure 9.14: Entanglement entropy for the q = 4 Potts model. The dashed lines show the trajectory of the entropies after the transition.

9.5 Conclusion

The results of this chapter, in conjunction with those from Chapter 7, provide some interesting insights into the PEPS algorithm and its application to first-order and continuous quantum phase transitions. These results reinforce the central idea in tensor network theory: that the computational difficulty of simulating a quantum system is deeply connected to the degree and nature of entanglement in the system. In this chapter, we have considered the quantum Potts model with increasing local dimension, q, and somewhat paradoxically determined that the computational resources required to describe the ground state of such systems reduce with increasing q. The explanation lies in the fact that the ground states of these systems possess vastly different structures of entanglement. In the quantum Ising model, the phase transition is continuous; at the phase transition the system is critical, the ground state possesses a large amount of entanglement, and the spatial correlations decay as a power law. As a result, the system is rather poorly characterised by mean-field theory. Due to the long-range nature of correlations in the ground state, an effective PEPS simulation of this system requires the full environment in the tensor update. By contrast, as q increases, the phase transition of the system changes to being of a first-order character, and the amount of entanglement in the system decreases greatly. The systems become progressively better described by mean-field theory as q increases, and for a PEPS solution, the simplified update becomes increasingly effective, significantly reducing the computational burden of accurately describing the ground state.

In table 9.2 we present a final summary of the results obtained for the quantum Potts model with PEPS. For each q, we record i) the magnitude of the discontinuity in the first derivative of the energy at the phase transition, ii) the correlation length, iii) the difference in the order parameter as estimated by iPEPS and MFT at the iPEPS phase transition, and iv) the percentage error in the location of the phase transition as estimated by iPEPS and MFT. In these key metrics, one observes that as q increases, the system takes on an increasingly first-order nature, with a corresponding decrease in the scale of correlations and an increase in the effectiveness of the MFT results.

                                            Ising (q = 2)   Potts q = 3   Potts q = 4
i)   Δ(dE/dλZ)                                    0             0.207         0.694
ii)  Estimated ξ                                  ∞†            0.56          0.51
iii) Difference in order parameter at PT,
     iPEPS vs MFT                                0.63           0.26          0.18
iv)  Percentage error in location of PT,
     iPEPS vs MFT                               31.2%          14.7%          1.8%

Table 9.2: A summary of results for the quantum Ising model and the q = 3 and q = 4 quantum Potts models. One can see that i) the discontinuity in the first derivative of the energy increases with q, whilst ii) the correlation length decreases. In iii) and iv), it can be seen that as q increases, the accuracy of the mean-field theory results improves, suggesting that the amount of entanglement in the ground state is decreasing. † An infinite correlation length cannot be reproduced by a PEPS computation of the kind suggested in this thesis; however, it is accepted that the quantum Ising model has a second-order phase transition, and hence a diverging correlation length. The results for the quantum Potts model correlation length are not converged in D, and should only be seen as indicative of the possible behaviour of ξ with q.
Chapter 10

The J1-J2 Model

10.1 Introduction

The study of geometrically frustrated spin systems remains a great challenge in computational quantum many-body physics. Quantum Monte Carlo, a very effective method for non-frustrated systems, suffers from the well-known sign problem when applied to frustrated systems [HS00]. Furthermore, exact techniques such as exact diagonalization are still limited by poor computational cost scaling with system size. In light of these difficulties, determining the effectiveness of tensor network approaches to such problems is of considerable consequence.

Geometrically frustrated systems occur when the Hamiltonian contains terms that compete with each other, such that the energetic minimization of one or more terms is in disagreement with the energetic minimization of others, as a result of the nature of the interactions and the geometric structure of the system. Consider the simple classical example shown in fig. 10.1. Here, we have three sites arranged on a triangle. Each pair of sites interacts in accordance with either a ferromagnetic (fig. 10.1i) or antiferromagnetic Ising interaction (fig. 10.1ii). One can see that in the ferromagnetic case, we can simultaneously minimise the energy contributions from each of the links without conflict. In the antiferromagnetic case, we observe that minimising with respect to the link A-B means that we find it difficult to choose the appropriate spin orientation for site C. Whilst this simple treatment is illustrative of the problem, it does not fully reproduce its complexity. Considering the statistical properties of such a system in the thermodynamic limit at finite temperature [Wan50], one begins to sense the immensity of the problem. For frustrated quantum spin systems at zero temperature, quantum fluctuations give rise to enormously complex descriptions of the ground state [ML04].

Figure 10.1: A simple example of a frustrated system. The three sites in i) are acted upon by a ferromagnetic interaction. The minimum energy state is straightforward. The three sites in ii) are acted upon by an antiferromagnetic interaction. Whilst it is easy to minimise the energy for the link A-B, it is not clear then what state spin C should take.

In modern condensed matter physics, frustrated models such as the XY model and Heisenberg antiferromagnet on the triangular lattice [And73, JPPU80], the Heisenberg Kagome antiferromagnet [Sac92] and the Shastry-Sutherland model [SS81] continue to generate great interest. For example, in studying the phase diagram of oxide superconductors, Anderson connected magnetic frustration to the mechanism of high-Tc superconductivity [And87]. More generally, frustrated systems raise the possibility of rich phase diagrams containing exotic and elusive types of ground states, and this alone motivates their study.

10.2 The J1-J2 Hamiltonian

In this chapter, we study the J1-J2 model, also known as the frustrated Heisenberg antiferromagnet on the square lattice, one of the most fundamental models of a frustrated quantum spin system. The Hamiltonian for the J1-J2 model is

\[ H = J_1 \sum_{\langle i,j \rangle} \vec{S}_i \cdot \vec{S}_j + J_2 \sum_{\langle\langle i,j \rangle\rangle} \vec{S}_i \cdot \vec{S}_j, \tag{10.1} \]

where \( \vec{S}_i = (S^x_i, S^y_i, S^z_i) \). The J1 term describes an isotropic antiferromagnetic nearest-neighbour interaction and the J2 term describes a next-to-nearest-neighbour interaction. To elucidate the competing forces, consider a classical treatment of the model at zero temperature. For J2/J1 < 0.5, the J1 term dominates and the system lies in a Néel ordered ground state with (π, π) periodicity. For J2/J1 > 0.5, the J2 term dominates and the system breaks into two diagonal sub-lattices, each with its own Néel order. Globally the system is said to exist in a collinear phase, with (π, 0) or (0, π) periodicity. At J2/J1 = 0.5 a classical critical point exists.
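The classical picture above can be verified directly. The sketch below is a simplified classical check (unit-length collinear spins on a small periodic lattice, an illustrative assumption, not the quantum model): it measures the energy per site of the Néel and collinear patterns, recovering the crossing at J2/J1 = 0.5, and reads off their ordering wavevectors from the dominant Fourier peak.

```python
import numpy as np

L = 8
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
neel = (-1.0) ** (x + y)   # (pi, pi) pattern
coll = (-1.0) ** x         # (pi, 0) pattern: stripes of aligned spins

def energy_per_site(s, j1, j2):
    """Classical energy per site; each nearest and diagonal bond counted once."""
    e1 = (s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum()
    e2 = (s * np.roll(np.roll(s, 1, 0), 1, 1)).sum() \
       + (s * np.roll(np.roll(s, 1, 0), -1, 1)).sum()
    return (j1 * e1 + j2 * e2) / s.size

def ordering_wavevector(s):
    """Indices (in units of 2*pi/L) of the dominant Fourier peak."""
    f = np.abs(np.fft.fft2(s))
    return tuple(int(i) for i in np.unravel_index(f.argmax(), f.shape))

# Per site: Neel gives 2(J2 - J1), collinear gives -2*J2, so the two patterns
# exchange stability at exactly J2/J1 = 0.5.
for j2 in (0.3, 0.7):
    print(j2, energy_per_site(neel, 1.0, j2) < energy_per_site(coll, 1.0, j2))

print(ordering_wavevector(neel), ordering_wavevector(coll))  # (4, 4) (4, 0) for L = 8
```

The Fourier peaks at (L/2, L/2) and (L/2, 0) correspond to q = (π, π) and q = (π, 0), the periodicities quoted above.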
We wish to study the effect of quantum fluctuations on this picture. Furthermore, we wish to study the suitability of PEPS for simulating such a system. Previous investigations have helped develop something of a general picture of the phase diagram. In the limits of low J2/J1 and high J2/J1, we expect two respective phases with Néel and collinear order. These phases are depicted in fig. 10.2. In each case, the global SU(2) symmetry of the Hamiltonian is broken down to a U(1) symmetry, and the phases may be distinguished by the momentum space representation of the spin-spin correlation function [MVC09],

\[ S(\vec{q}) = \frac{1}{N^2} \sum_{k,l} e^{i \vec{q} \cdot (\vec{r}_k - \vec{r}_l)} \big\langle \vec{S}_k \cdot \vec{S}_l \big\rangle, \tag{10.2} \]

where \( \vec{r}_k \) and \( \vec{r}_l \) are the spatial lattice vectors of two sites k and l. For the Néel ordered state, S(q) peaks at q = (π, π), whilst for the collinear state, S(q) peaks at either q = (0, π) or q = (π, 0). S(q) is known as a structure factor, as computing it for these values of q can tell us whether our state has Néel or collinear order.

Figure 10.2: (left) An example of a Néel ordered state. The momentum space representation of the spin-spin correlation function peaks at q = (π, π). (right) An example of a collinear state. Here, since the diagonal interactions dominate, the system splits into two Néel ordered sub-lattices: the red spins form one sub-lattice, marked by the pink dotted lines, and the black spins form another, marked by the grey dotted lines. S(q) peaks at q = (0, π) or q = (π, 0). In this example we show the q = (0, π) state.

In between these phases, the increased effects of frustration make it difficult to classify the ground state, although finite-size scaling of exact diagonalization results has suggested that at J2/J1 ≈ 0.35-0.4, the system undergoes a continuous phase transition from the Néel ordered state to a paramagnetic state, which persists until a first-order transition to the collinear ordered ground state at around J2/J1 ≈ 0.6-0.65. A general picture of the phase diagram is shown in figure 10.3. There have been several candidate ground states suggested for this paramagnetic region, including:

1. A spin-liquid with no broken translational or rotational (SU(2)) symmetries [FKK+90].

2. A dimerized ground state, where the translational symmetry is broken, but the SU(2) symmetry preserved [GSH89, SN90, SWHO99, ZU96, KOSW99].

3. A twisted, or spiral ordered, ground state [DM89].

4. A chiral (parity breaking) ordered ground state [SZ92].

Figure 10.3: The generally accepted phase diagram for the J1-J2 model.

We aim in this chapter to present a first picture of what an iPEPS treatment of the system suggests for the J1-J2 model in the thermodynamic limit. It is generally thought that spin-liquid ground states will be difficult to converge to with a PEPS simulation, because the PEPS algorithm quite systematically breaks translational symmetries by imposing a multi-site unit-cell structure. Additionally, there has been much emphasis on novel crystalline structures in the intermediate region, and since this is a preliminary study of the model, we will restrict ourselves to answering a few select questions. Firstly, we wish to determine whether the ground state phase diagram computed with iPEPS displays an intermediate paramagnetic region. If so, we wish to determine approximately where the boundaries of such a region are located. Lastly, we wish to investigate whether the intermediate region is characterised by any of the suggested dimerized ground state patterns. Results for the finite J1-J2 model with PEPS, for lattices with up to 14 × 14 spins and open boundary conditions, have already been published in [MVC09].
The authors there detected some evidence of a dimer ground state in the intermediate region, but could not reach a definite conclusion. In the following sections, we will define two of the most commonly suggested dimer patterns. Then we will describe some important algorithmic considerations, before presenting our results for the phase diagram.

10.3 Columnar and Plaquette Ordered Ground States

Many authors have suggested the existence of ground states that resemble a regular arrangement of two-body singlets. The most common types of order suggested are the columnar dimer order and the plaquette resonating valence bond (RVB) order. We illustrate these two basic foundational states in fig. 10.4. In some schemes, such as the reduced subspace exact diagonalization in [MLPM06], the ground states are computed and their properties are checked for agreement with the properties of these candidate ground states. Series expansion schemes [SWHO99] have gone even further, taking as a starting point a Hamiltonian that gives rise to one of the candidate ground states. That is, the J1-J2 Hamiltonian is considered in the form

\[ H_\lambda = H_\alpha + \lambda H_\beta, \tag{10.3} \]

where the ground state of \( H_\alpha \) is either the columnar dimer or plaquette RVB state, and as λ is increased from 0 to 1, we recover the J1-J2 Hamiltonian. In this iPEPS study, we initialise our imaginary-time evolution with both random initial states and initial states biased toward some dimer order. We also define schemes for which the structure of the tensor network quite deliberately favours a dimerized state in a columnar arrangement.

10.4 Algorithmic Considerations

The presence of a next-to-nearest-neighbour interaction in our Hamiltonian poses a challenge for our iPEPS formalism. Firstly, due to the diagonal interactions, it is clear that the minimal unit cell will be a 2-by-2 block A-B-C-D.
More importantly, in other models we have studied, there has been a PEPS bond allocated to every term in the Hamiltonian. Enforcing this for the J1-J2 model requires that each PEPS tensor has 8 shared bonds. Each tensor then contains D^8 d components, and contracting the environment for such a network is more complex and computationally more expensive. A standard iPEPS algorithm using the full environment update is simply not affordable. For this reason, we needed to consider modifications to this approach. Specifically, these are:

1. A square (4-bond) iPEPS and the full environment update. The Hamiltonian is expressed in four-body terms operating on a 2x2 plaquette. For each update, the four PEPS tensors are updated based on the environment surrounding the plaquette. This scheme is of broader interest, as it can be used for Hamiltonians with general 4-site plaquette interactions.

2. A PEPS with 8 shared bonds on each tensor, using the simplified update scheme to perform imaginary time evolution.

3. A simplified update scheme for a square PEPS (i.e. 4 shared bonds) with an extension allowing the update of diagonal links. This approach was used in a study of 2D fermionic models with next-nearest-neighbour interactions [CJV10].

4. Joining adjacent pairs of sites in a way that would be favourable if the ground state possessed columnar order, and once again using 4-body plaquette terms.

These four schemes are shown in figure 10.5.

Figure 10.4: The suggested VBC states. i) A columnar dimer arrangement. ii) A plaquette RVB arrangement. iii) The plaquette RVB for a single plaquette in terms of nearest-neighbour singlets.

Figure 10.5: The four alternative PEPS variants we have chosen. i) A square PEPS with updates on 2x2 plaquettes. There are four distinct plaquettes, each marked by a different colour. The Hamiltonian is written in terms of four-site plaquette operators h_p1, h_p2, h_p3 and h_p4. ii) An iPEPS with a bond for each Hamiltonian interaction. To minimise the computational load, the simplified update is used, and so each link bears a diagonal λ matrix. iii) A square PEPS updated by a simplified update capable of handling interactions between next-nearest neighbours. iv) A scheme designed to favour a columnar ordered ground state. Here, pairs of sites are grouped into tensors in a columnar fashion. Bonds along a column are of dimension D1, whereas bonds connecting columns are of dimension D2. The Hamiltonian is written in terms of four-site operators. There are two flavours of update: those on columns (red) and those between columns (yellow).

For scheme 1 (10.5i), the cost of computing the environment (via the CTM method) at each timestep scales as χ^3 D^6 + χ^2 D^6 d, the same as for a PEPS with A-B-B-A periodicity. This means that environments of D = 2, 3 and 4 PEPS should be computable with this method in reasonable time. However, the four-body conjugate-gradient update carries a one-time cost of χ^3 D^7 d^2 + χ^2 D^10 d^4 + D^8 d^8, followed by a per-iteration cost of χ^3 D^6 + χ^2 D^8 + D^10 d^4. This means that simulating with a D = 4 PEPS becomes excessive. Furthermore, since correlations between next-nearest-neighbour sites flow along the horizontal and vertical links of the PEPS, the correlation-carrying bonds become quickly saturated. As a result, it was seen that results for D = 2 converged poorly, and that for this scheme the only reasonable value was D = 3.

For scheme 2 (10.5ii), the simplified update of the PEPS scales as D^9 d^2 + D^3 d^4, but to compute observables, one needs to develop an alternative CTMRG scheme to take into account renormalization of the diagonal bond indices. This can be done at a cost per CTMRG step of χ^3 D^18 + χ^2 D^15 d, which is excessive, and so we use a reduced tensor rank scheme which scales as χ^3 D^12 + χ^2 D^13 d.
Furthermore, the cost of calculating a four-site reduced density operator from the A-B-C-D unit cell scales as χ³D²⁰ + χ²D²⁴d + χ²D¹⁶d⁸. Even though this only needs to be performed once, it is still a very costly computation, and so the scheme is only computationally viable for a bond dimension of D = 2.

For scheme 3 (10.5iii), the cost of each diagonal update scales as D⁶d⁴ + D⁵d⁶. This allows one to converge PEPS for comparatively large values of D. The one-time computation of the environment scales as χ³D⁶ + χ²D⁶d, and the computation of four-site reduced density operators scales as χ³D⁶d⁴ + χ²D⁴d⁸. For this scheme, one can converge and compute observables for up to D = 6. The disadvantage of this scheme is that it cannot reproduce long-range correlations, as each update is effectively local. We saw in Chapters 7 and 9 that the degree to which this affects the accuracy of results depends on the scale of entanglement in the system and, by virtue of this, on the order of any quantum phase transitions.

Scheme 4 (10.5iv) is biased towards representing the correlations present in states with columnar order, and so will work best if this is the type of order in the ground state. It uses an environment to update the PEPS, with each step of the CTMRG algorithm scaling once again as χ³D⁶ + χ²D⁶d². The four-site reduced density operator can be computed with leading-order cost χ³D⁶d⁴ + χ²D⁸d⁵ + χ²D⁴d⁸. Once again, the Hamiltonian is written in four-site terms, with the most costly update being for the sites bounded by the yellow square in fig. 10.5iv. During this conjugate-gradient routine, we incur a one-time cost of D⁸d¹² + χ²D¹⁰d⁸ + χ³D⁸, plus a per-gradient computation cost of χ²D⁸ + χ³D⁶ + D¹⁰d⁸.

The computational costs of the four schemes are summarised in Table 10.1.

Table 10.1: Leading-order computational cost of the four PEPS schemes for the J1-J2 model. Operations that need only be computed once per simulation are marked with †. The parameters S and L represent the number of steps for contracting the environment and performing an iterative update respectively. Note that the parameter D is not necessarily equivalent across the schemes; for instance, as scheme 2 dedicates a bond to every Hamiltonian interaction, it will likely require a relatively smaller value of D to represent the correlations in the system, but it also possesses a much higher computational cost scaling in D.

  Scheme 1. Environment: S1(χ³D⁶ + χ²D⁶d). Four-site RDM†: χ³D⁶d⁴ + χ²D⁴d⁸. Update: χ³D⁷d² + χ²D¹⁰d⁴ + D⁸d⁸ + L1(χ³D⁶ + χ²D⁸ + D¹⁰d⁴).
  Scheme 2. Environment†: S2(χ³D¹² + χ²D¹³d). Four-site RDM†: χ³D²⁰ + χ²D²⁴d + χ²D¹⁶d⁸. Update: D⁹d² + D³d⁴.
  Scheme 3. Environment†: S3(χ³D⁶ + χ²D⁶d). Four-site RDM†: χ³D⁶d⁴ + χ²D⁴d⁸. Update: D⁶d⁴ + D⁵d⁶.
  Scheme 4. Environment: S4(χ³D⁶ + χ²D⁶d²). Four-site RDM†: χ³D⁶d⁴ + χ²D⁸d⁵ + χ²D⁴d⁸. Update: D⁸d¹² + χ²D¹⁰d⁸ + χ³D⁸ + L4(χ²D⁸ + χ³D⁶ + D¹⁰d⁸).

In figure 10.6 we plot a brief comparison of the results obtained from the techniques described above. One can see that in the intermediate phase, the simplified update produces the lowest energy results.
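The leading-order scalings in Table 10.1 can be compared concretely. The sketch below evaluates the per-update cost formulas of the four schemes; the parameter values χ = 40, d = 2 and L = 10 are illustrative only, not the values used in our simulations.

```python
# Leading-order per-update cost formulas from Table 10.1, evaluated for
# illustrative parameters. chi is the environment dimension, D the PEPS
# bond dimension, d the physical dimension, and L the number of
# iterations of the conjugate-gradient update.

def cost1(chi, D, d, L):
    return (chi**3 * D**7 * d**2 + chi**2 * D**10 * d**4 + D**8 * d**8
            + L * (chi**3 * D**6 + chi**2 * D**8 + D**10 * d**4))

def cost2(D, d):
    return D**9 * d**2 + D**3 * d**4

def cost3(D, d):
    return D**6 * d**4 + D**5 * d**6

def cost4(chi, D, d, L):
    return (D**8 * d**12 + chi**2 * D**10 * d**8 + chi**3 * D**8
            + L * (chi**2 * D**8 + chi**3 * D**6 + D**10 * d**8))

chi, d, L = 40, 2, 10
for D in (2, 3, 4, 6):
    print(D, cost1(chi, D, d, L), cost2(D, d), cost3(D, d), cost4(chi, D, d, L))
```

Even at modest D, the full-environment updates of schemes 1 and 4 dominate the simplified updates of schemes 2 and 3 by several orders of magnitude, which is why only scheme 3 could be pushed to D = 6.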
One can also compare these results with the columnar (*) and plaquette RVB (+) series expansion results of [SWHO99]: the D = 6 PEPS produces an energy at the point J2/J1 = 0.5 that is 0.5% lower than the columnar expansion result and 2% lower than the plaquette RVB expansion result. For these schemes, we also tried biasing the imaginary-time evolution towards the columnar dimer and plaquette RVB orderings by starting in the states given in fig. 10.4. However, initialising the states in this way led to ground states with slightly higher energies.

Figure 10.6: A plot comparing the energies given by the four PEPS algorithm variants (scheme 1, D = 3; scheme 2, D = 2; scheme 3, D = 6; scheme 4, D1 = 4, D2 = 2). The + and * symbols mark the energies for series expansions around plaquette RVB and columnar dimer ground states respectively [SWHO99].

10.5 Results

10.5.1 Energy and structure factors

Using scheme 3 as our method for characterizing the J1-J2 model, we derive the phase diagram for various values of D. In figure 10.7, we plot the energy per lattice site for various values of D.

Figure 10.7: A plot of the energy per lattice site vs J2/J1 for various values of D. (inset) Convergence of the energy per lattice site with the bond dimension, D, at J2/J1 = 0.5.

We next plot the Néel (fig. 10.8) and collinear (fig. 10.9) structure factors in order to demonstrate that the results comply with the well-understood behaviour in the low-J2 and high-J2 limits. We use the Néel and collinear structure factors as defined in [MVC09]. For an infinite, translationally invariant system, the structure factors reduce to:
S = \frac{1}{16} \sum_{x=1}^{2} \sum_{y=1}^{2} \sum_{x'=1}^{2} \sum_{y'=1}^{2} \varepsilon_{xyx'y'} \langle \vec{S}_{xy} \cdot \vec{S}_{x'y'} \rangle    (10.4)

where x, y, x' and y' define the relative row and column positions within some A-B-C-D plaquette. For the Néel structure factor, ε_{xyx'y'} = e^{j(x+y−x'−y')π}. For the collinear structure factor, ε_{xyx'y'} = e^{j(x−x')π} or ε_{xyx'y'} = e^{j(y−y')π}, depending on whether the state breaks into (0, π) or (π, 0) order.

In our plot it can be seen that as J2 increases from 0, the Néel order reduces steadily, and in the intermediate region the state displays little or no evidence of Néel order. Likewise, as we decrease J2 from J2 = 1, there is a steady reduction in the amount of collinear ordering in the ground state. For D = 5, it appears as though Néel order disappears by J2 = 0.55. However, for D = 6, remnants of Néel order reappear.

Figure 10.8: A plot of the Néel order parameter vs J2/J1 for various values of D.

The D = 6 plot shows a curious hump at around J2 = 0.4 that is not evident for other values of D. This behaviour is interesting as it occurs in a region where a continuous phase transition is widely predicted; some studies of the phase diagram predict as many as three continuous phase transitions between J2 = 0.34 and J2 = 0.5 [SOW01]. Recall that for the quantum Ising model, the simplified update estimated the critical magnetic field and the local observable properties in the neighbourhood of the phase transition poorly (see fig. 7.3). It is possible that this hump is evidence of a continuous transition that, for a simplified PEPS scheme, would only be properly described with very large D. Figure 10.9 shows the collinear structure factor: for increasing D, it appears as though the collinear structure factor disappears at progressively higher values of J2.
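To make Eq. (10.4) concrete, the following sketch evaluates the Néel and collinear structure factors for the correlators of an idealised classical Néel pattern of unit spins. That input is an assumption made purely for illustration; in practice the correlators ⟨S_xy · S_x'y'⟩ come from the converged PEPS.

```python
import cmath

# Structure factors of Eq. (10.4): a phase-weighted sum over the sixteen
# spin-spin correlators within a 2x2 (A-B-C-D) plaquette, x, y, x', y' = 1, 2.

def structure_factor(corr, phase):
    total = 0.0 + 0.0j
    for x in (1, 2):
        for y in (1, 2):
            for xp in (1, 2):
                for yp in (1, 2):
                    total += phase(x, y, xp, yp) * corr(x, y, xp, yp)
    return total.real / 16.0

def neel_phase(x, y, xp, yp):       # epsilon for the Neel factor S(pi, pi)
    return cmath.exp(1j * cmath.pi * (x + y - xp - yp))

def collinear_phase(x, y, xp, yp):  # epsilon for the collinear factor S(pi, 0)
    return cmath.exp(1j * cmath.pi * (x - xp))

def neel_corr(x, y, xp, yp):
    # Classical Neel pattern of unit spins: S_xy = (-1)^(x+y), so the
    # correlator factorises as (-1)^(x+y) * (-1)^(x'+y').
    return (-1) ** (x + y) * (-1) ** (xp + yp)

print(structure_factor(neel_corr, neel_phase))       # maximal Neel order
print(structure_factor(neel_corr, collinear_phase))  # no collinear order
```

The first value is 1 (maximal Néel order) and the second vanishes, matching the expected low-J2/J1 behaviour discussed above.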
A most interesting observation from our simulations is the dependence of the final approximation to the ground state on the initial state. As mentioned previously, initialising the simulation in one of the dimer states did not lead to an approximation to the ground state with a lower energy than a randomly initialised simulation. This suggests that the energy minimization becomes trapped in local minima.

Figure 10.9: A plot of the collinear order parameter vs J2/J1 for various values of D.

The magnetic order parameters of the ground states from each initialization agree that the system exists in a Néel ordered phase for low J2/J1 and a collinear ordered phase for high J2/J1. In between, the magnetic structure factors decay and there is the possibility of a paramagnetic phase appearing for increasing D. For random initial states, the actual values of the magnetic observables can fluctuate over multiple runs of the energy minimization, and a smoother picture is found by initializing a D = Q simulation with the PEPS found for D = Q − 1. However, this initialization also results in an energy that is generally slightly higher. Additionally, the update itself can be sensitive to a lattice rotational symmetry, or explicitly break the symmetry (see [CJV10]), but this seems to have no observable effect on the determined ground states. This all points towards the PEPS selecting from a rich set of states with similar energies.

For the values of D that we can access, it appears as though the ground state picture is difficult to determine and that the imaginary-time evolution often becomes trapped in local minima. A more in-depth study might incorporate symmetries in order to restrict the ground states to subspaces of the Hilbert space invariant under global U(1) or SU(2) operations.
However, as a first approximation, it can be said that our method produces a ground state energy in the intermediate region that is lower than that obtained by series expansion around the suggested VBC configurations. So the question we might like to answer is: to what degree does our solution show properties consistent with a columnar or plaquette ordered VBC?

10.5.2 Columnar and Plaquette Order Parameters

Here, we aim to use the order parameters suggested in [MVC09] to determine whether the ground states we find in the intermediate region are of columnar dimer or plaquette RVB form.

Columnar order

To detect columnar order in the intermediate regime, we compute the spin-spin correlator ⟨S_i · S_j⟩ for all neighbouring sites i and j. Columnar order would be evidenced by particular parallel pairs of links having a much stronger correlation than other neighbouring links. We show in fig. 10.10 the nearest-neighbour correlation values for the links of an A-B-C-D plaquette for the D = 6 ground states. For low J2/J1, one can see that the Néel ordered ground state exhibits roughly equal correlation values on all vertical and horizontal links. For J2/J1 = 1, one can see that the correlations along horizontal links are roughly equal and contribute negatively to the energy, while the correlations on vertical links contribute positively to the energy. This is indicative of (π, 0) collinear order. In between, there appears to be an abrupt change between the two magnetic orders, consistent with a first-order phase transition, and little evidence of a columnar dimer order appearing.

Plaquette order

To detect a plaquette RVB state, we compute the expectation value of the cyclic permutation operator,

Q_{ABCD} = \frac{1}{2} \left( P_{ABCD} + P_{ABCD}^{-1} \right),    (10.5)

where P_{ABCD} is a cyclic permutation operator on a 2x2 block of sites. For a pure plaquette state, the plaquette order parameter should be 1 on the four sites of the plaquette, and 1/8 elsewhere.
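Equation (10.5) is straightforward to test on a single 2x2 block. The sketch below is our own illustrative construction for spin-1/2 sites, with the plaquette RVB written as the symmetric combination of the horizontal and vertical singlet coverings; it verifies that such a state has ⟨Q_ABCD⟩ = 1.

```python
import numpy as np

# Cyclic permutation order parameter Q = (P + P^-1)/2 of Eq. (10.5) on a
# 2x2 block of spin-1/2 sites A, B, C, D, labelled cyclically around the
# plaquette. States are stored as tensors psi[a, b, c, d].

def cyclic(psi):      # P: shift the spins one step around the plaquette
    return np.transpose(psi, (1, 2, 3, 0))

def cyclic_inv(psi):  # P^-1: shift one step the other way
    return np.transpose(psi, (3, 0, 1, 2))

def q_expect(psi):
    psi = psi / np.linalg.norm(psi)
    return 0.5 * (np.vdot(psi, cyclic(psi)) + np.vdot(psi, cyclic_inv(psi))).real

s = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2.0)  # spin-1/2 singlet

horiz = np.einsum('ab,cd->abcd', s, s)  # singlets on (A,B) and (C,D)
vert = np.einsum('bc,da->abcd', s, s)   # singlets on (B,C) and (D,A)
plaquette = horiz + vert                # plaquette RVB state (unnormalised)

print(q_expect(plaquette))  # the cyclically symmetric plaquette state
print(q_expect(horiz))      # a single dimer covering, for comparison
```

The plaquette RVB combination gives exactly 1, as stated above, while a single dimer covering gives 1/2, so the operator distinguishes a plaquette RVB from a columnar-like arrangement on the block.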
Fig. 10.10 shows the plaquette order parameters for various values of J2. One can see that across the entire phase diagram, there is no evidence of a translational symmetry breaking of the kind that would suggest a plaquette RVB type order. Instead, for a given J2/J1 the plaquette order parameter is similar on each plaquette in the system. There is a steady increase in the plaquette order parameter from the Néel ordered state at J2/J1 = 0, before it suddenly drops at J2/J1 ≈ 0.6 in a manner consistent with a first-order phase transition.

10.5.3 Entanglement Entropy

The plot of the four-site entanglement entropy, averaged over all four plaquettes, is shown in fig. 10.11. It is difficult to extract a general picture of the J1-J2 entanglement from this plot, other than to say that the entanglement entropy increases as J2 increases, and peaks at the point where the energy plot suggests a first-order phase transition. At J2 = 0.35 there again appears to be a slight rise in the entanglement entropy of the D = 6 results, which again could be a very early indication of a continuous phase transition in this region.

10.6 Concluding Remarks

In this chapter, we studied the J1-J2 model with the iPEPS algorithm. We presented energies for several versions of the algorithm, and found that for simulations of an acceptable duration, a version of the simplified PEPS algorithm for a square PEPS with diagonal interactions gave the best results. One of our main objectives here was to test the hypothesis, proposed elsewhere, that in the regime where the system is most frustrated, the ground state exhibits a tendency towards a valence bond crystal ordering. We examined our ground states for a tendency toward either a columnar dimer or plaquette RVB ordering, but could not see the onset of either, even when the system was coerced to favour such an order.
Instead, our results favoured a first-order phase transition from the Néel ordered ground state to the collinear ground state at J2 ≈ 0.6 − 0.625.

Figure 10.10: A plot of the nearest-neighbour spin-spin expectation values and plaquette order parameters. The labels near the links give the expectation value of the spin-spin correlator, ⟨S_i · S_j⟩, for each of the eight distinct links. The labels enclosed in the box give the expectation value of the plaquette order parameter, Q_ABCD, for each of the four distinct plaquettes. As can be seen, there is no tendency toward any VBC order, but an abrupt change from Néel order to collinear order.

Figure 10.11: A plot of the entanglement entropy for the J1-J2 model.

Chapter 11: Geometric Entanglement

11.1 Introduction

How well can we represent the ground state of a strongly correlated quantum many-body system without using any quantum correlation at all? This insightful question lies at the root of a well-established method in condensed matter physics: the mean-field (MF) theory approach. Using product states as a first step to study quantum many-body systems can provide some qualitative and quantitative answers about the behavior of the system at hand. For an N-body system governed by a Hamiltonian H, the mean-field approximation to the ground state is the product state that minimises the energy, i.e.

e_H \equiv \langle \Phi_H | H | \Phi_H \rangle = \min_{\Phi} \langle \Phi | H | \Phi \rangle    (11.1)

Here, |Φ⟩ = |φ^[1]⟩ ⊗ |φ^[2]⟩ ⊗ ··· ⊗ |φ^[N]⟩ is a product state of the N bodies and |Φ_H⟩ is the mean-field solution. While the energy obtained in this way is still higher than the true ground state energy (unless the ground state is a product state), it is the best possible estimate of the ground state energy that can be achieved using a product state alone.
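As a minimal illustration of the minimisation in Eq. (11.1), the sketch below computes the mean-field solution for the transverse-field Ising model studied later in this chapter, assuming a real, translationally invariant single-site ansatz parametrised by one angle (a simplification we make here for illustration).

```python
import numpy as np

# Mean-field minimisation of Eq. (11.1) for the transverse-field Ising
# model H = -sum_<r,r'> sx sx - lambda_z sum_r sz on a lattice with
# coordination number z. For a real single-site state parametrised by
# theta, <sx> = sin(theta) and <sz> = cos(theta), and each site owns
# z/2 bonds, so the energy per site is
#     e(theta) = -(z/2) sin(theta)^2 - lambda_z cos(theta).

def mean_field(lam, z, n_grid=200001):
    theta = np.linspace(0.0, np.pi, n_grid)
    e = -(z / 2.0) * np.sin(theta) ** 2 - lam * np.cos(theta)
    k = int(np.argmin(e))
    return e[k], abs(np.sin(theta[k]))  # energy per site, order parameter <sx>

for lam in (2.0, 3.5, 4.5):  # square lattice, z = 4
    e0, mx = mean_field(lam, 4)
    print(lam, e0, mx)
```

The minimisation gives a symmetry-broken solution for λ_z < z and a polarised one above it, i.e. a mean-field critical field λ_z = z = 4 for the square lattice, above the quantum Monte Carlo value of roughly 3.04 quoted in section 11.3: a standard mean-field overestimate.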
Nevertheless, this approach may not be satisfactory if the aim is to reproduce other properties of the system, such as local order parameters. An alternative question to ask, motivated by the notion of the fidelity of two quantum states, is: what is the product state that maximises the overlap with the ground state, |Ψ0⟩? That is, we want to find the product state |Φ_G⟩ such that

\Lambda_{\max}(\Psi_0) \equiv |\langle \Phi_G | \Psi_0 \rangle| = \max_{\Phi} |\langle \Phi | \Psi_0 \rangle|.    (11.2)

Here, we call |Φ_G⟩ the closest product state to |Ψ0⟩. If we can determine this product state, we can ask several more interesting questions, such as: In what way does such a state differ from the mean-field solution, if at all? If they are different, how do their estimates of the energy and order parameter, among other quantities, compare? Answering such questions provides objective feedback on the ability of mean-field theory to describe strongly correlated systems. Also, recall from section 3.2.1 that the overlap between a given state and its closest product state gives rise to a macroscopic measure of entanglement, the so-called geometric entanglement of the state:

E(\Psi_0) \equiv -\frac{\log \Lambda_{\max}^2(\Psi_0)}{N}.    (11.3)

Here, a geometric entanglement E = 0 occurs when the ground state |Ψ0⟩ is a product state, and as E increases, the state is said to become more entangled [WDM+05, SOFZ10, OW09]. Thus, determining |Φ_G⟩ sheds further light on the structure of entanglement in the ground states of quantum many-body systems. In this chapter, we present an algorithm for computing |Φ_G⟩ for ground states of infinite 2D lattice models. Our approach is based around obtaining PEPS approximations |Ψ0(D)⟩ to the ground state for increasing bond dimension D and observing convergent behaviour in the properties of |Φ_G⟩.
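To make Eqs. (11.2) and (11.3) concrete, the following toy sketch brute-forces the closest translationally invariant product state for a small GHZ state, for which Λmax = 1/√2 is known exactly (the grid search is our own illustrative construction, not the variational algorithm described later in this chapter).

```python
import numpy as np

# Closest product state (Eq. 11.2) and geometric entanglement per site
# (Eq. 11.3) for an N-qubit GHZ state, restricting the search to
# translationally invariant product states |phi>^(tensor N). For GHZ the
# exact answer is Lambda_max = 1/sqrt(2), i.e. E = log(2)/N.

def closest_product_overlap(N, n_grid=201):
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_grid):
        for phi in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
            a = np.cos(theta / 2.0)                      # amplitude on |0>
            b = np.exp(1j * phi) * np.sin(theta / 2.0)   # amplitude on |1>
            # <phi|^N |GHZ> = (conj(a)^N + conj(b)^N) / sqrt(2)
            overlap = abs(np.conj(a) ** N + np.conj(b) ** N) / np.sqrt(2.0)
            best = max(best, overlap)
    return best

N = 4
lam = closest_product_overlap(N)
E = -np.log(lam ** 2) / N
print(lam, E)
```

Note that E decays as 1/N for GHZ: per site, even this maximally nonlocal state looks almost product-like, which is a useful reminder that densities of geometric entanglement, such as those reported later in this chapter, are naturally small numbers.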
In particular, we present results for three models: i) the quantum Ising model on the hexagonal lattice, ii) the quantum Ising model on the square lattice, and iii) the q = 3 Potts model on the square lattice. Our approach is thus broken into two parts:

1. Finding PEPS ground states of infinite, translationally invariant Hamiltonians.
2. Maximising the overlap between |Ψ0(D)⟩ and a variational product state |Φ_G⟩.

We have extensively outlined the iPEPS algorithm for finding ground states of local Hamiltonians in Chapter 5. We will not review it in this chapter, except to say that since we are interested in the quantum Ising model, we make use of the CTMRG algorithm and the full environment to converge to the ground state (refer to fig. 7.3 for justification). For the second task, we are inspired by the fidelity per lattice site outlined in [ZB07, ZPac06, ZOV08]. These works formalised the idea that the closeness of infinite, translationally invariant PEPS states can be captured in an intensive quantity. In the following sections, we describe our variational algorithm for determining |Φ_G⟩, before presenting results for the three models we have studied. Our approach allows us to touch on a particularly interesting idea from quantum information theory: the relation between the monogamy of entanglement [CKW00, Ter04, OV06] and the connectivity of a quantum many-body system. Monogamy of entanglement states that entanglement in a many-body system is a shared resource; as the number of entangled bodies increases, the degree to which any two bodies are entangled decreases. Thus, one might expect for lattice systems that as the connectivity of the lattice increases, the geometric entanglement of the ground state decreases. We will provide some tentative numerical evidence suggesting that this is the case for a PEPS study of the quantum Ising system on the hexagonal and square lattices.
In doing so, we reflect on results for 1D quantum systems [SOFZ10, OW09, OW10].

11.2 Computing the Closest Product State

We now outline a method to numerically compute |Φ_G⟩ efficiently. Assume that we have an infinite PEPS describing the ground state of a quantum lattice model, and that this ground state is invariant under shifts by one site, or equivalently that our PEPS is characterised by the same tensor A at each lattice site (see fig. 11.1a). We start our search for |Φ_G⟩ with a random infinite product state |Φ0⟩. We make the assumption that our product state is also translationally invariant, i.e. |Φ0⟩ = |φ0⟩^⊗∞. Such an assumption is vital from a computational point of view, but is not always theoretically sound: it is relatively easy to design a ground state for which the closest product state has periodicity on a longer scale than the ground state itself. However, our algorithms rely on translational invariance and so we make this restriction. In this form, |Φ0⟩ can be described by a D = 1 PEPS made up of tensors φ (see fig. 11.1b). To update |Φ0⟩, we take the following steps:

(i) We define the distance between our product state, |Φ0⟩, and ground state, |Ψ0⟩, by the square error ε_SE,

\varepsilon_{SE} = \| |\Psi_0\rangle - |\Phi_0\rangle \|^2 = \langle \Psi_0 | \Psi_0 \rangle - \langle \Phi_0 | \Psi_0 \rangle - \langle \Psi_0 | \Phi_0 \rangle + \langle \Phi_0 | \Phi_0 \rangle    (11.4)

Figure 11.1: The variational algorithm for finding the closest product state on the square lattice. For schematic clarity, we show the process for a system with periodic boundary conditions. a) The PEPS is defined by a single tensor A with four bond indices. b) The product state is defined by the vector φ. c) The tensor a is formed as the contraction of A with the conjugate of φ. d) The updated (unnormalised) product state tensor µ may be expressed as the contraction of a tensor network that contains a at every site except one.
(ii) We choose to make a local modification to our product state at site m, |φ0^[m]⟩ → |φ1^[m]⟩, such that the square error is minimised. That is, we find the state |φ1^[m]⟩ such that ∂ε_SE/∂(φ1^[m])* = 0. It can be shown that an unnormalised solution for |φ1^[m]⟩ is given by the vector |µ⟩, as computed by the contraction of the 2D tensor network in figure 11.1d.

(iii) We obtain the normalised solution, |φ1^[m]⟩ = |µ⟩/√⟨µ|µ⟩, and form a new translationally invariant product state, |Φ1⟩ = |φ1⟩^⊗∞.

At each step k, the fidelity per lattice site, lim_{N→∞} (1/N) log⟨Φ_k|Ψ0⟩, is computed by the same procedure as outlined in [ZOV08]. Our algorithm iterates until this quantity converges.

11.3 Results

Our simulations have been performed for the following 2D models in the thermodynamic limit:

(i) Quantum Ising model in a transverse field,

H = -\sum_{\langle \vec{r}, \vec{r}' \rangle} \sigma_x^{[\vec{r}]} \sigma_x^{[\vec{r}']} - \lambda_z \sum_{\vec{r}} \sigma_z^{[\vec{r}]},    (11.5)

on the hexagonal and square lattices. In the above equation, σ_α^[r] is the αth Pauli matrix at site r of the 2D lattice. According to quantum Monte Carlo calculations, the system undergoes a quantum phase transition from a Z2-symmetric phase to a broken-symmetry phase at critical points λ_z,c ∼ 2.13 for the hexagonal lattice and λ_z,c ∼ 3.04 for the square lattice [CA80].

(ii) Quantum 3-state Potts model,

H = -\sum_{\langle \vec{r}, \vec{r}' \rangle} \left( \sigma_x^{[\vec{r}]} \sigma_x^{[\vec{r}']2} + \sigma_x^{[\vec{r}]2} \sigma_x^{[\vec{r}']} \right) - \lambda_z \sum_{\vec{r}} \sigma_z^{[\vec{r}]},    (11.6)

where the Potts matrices at site r are defined as

\sigma_x^{[\vec{r}]} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \qquad \sigma_z^{[\vec{r}]} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.    (11.7)

We have studied this model on the square lattice, which is known to undergo a weakly first-order transition as a function of the transverse field λ_z [BBD08]. The results obtained by simulating the above models can be classified as follows:

11.3.1 Energy versus order parameter

In Figs.
11.2, 11.3 and 11.4 we show our results for the energy and order parameters as computed with i) the MF approximation to the ground state, |Φ_H⟩, ii) the iPEPS approximation to the ground state, |Ψ0(D)⟩, for several values of D, and iii) the closest product state to each of the iPEPS states, |Φ_G(D)⟩. Remarkably, the two product state representations are seen to have very different properties. The MF state energy is closer to the PEPS energy than that of the closest product state. On the other hand, the closest product state in each case more accurately represents the order parameter (and its critical exponent), and in this sense one can say that it more accurately captures the phase diagram of the system.

A few comments are in order. Firstly, the two product states are least accurate where the system is most entangled. For instance, the MF state predicts the order parameter most poorly near the phase transition, where the quantum Ising model is critical. Similarly, the closest product state predicts the energy poorly in this region. The same can be said for the quantum Potts model where, whilst there is no critical point, an increase in entanglement entropy has been witnessed approaching the phase transition (see Chapter 9). However, in an overall sense, the product state representations do a much better job of representing the quantum Potts model than the quantum Ising model, and this may be attributed to the lesser degree of entanglement in the quantum Potts system.

11.3.2 Geometric entanglement and local fidelities

In Fig. 11.5 we show our results for the density of global geometric entanglement E in the thermodynamic limit for different values of the bond dimension D. In the insets we show different local fidelities, calculated in the sense of [ZOV08], between the infinite PEPS for several D and the product state approximations |Φ_H⟩ and |Φ_G(D)⟩.
As a first comment, it should be noted that the results are not completely converged with D, and this is especially true at the phase transition. Moreover, for the hexagonal lattice, our results for D = 3 appear very slightly more entangled than the D = 4 results. This is most likely because the D = 4 PEPS could not be fully converged due to computation time constraints. Alternatively, it could be that the variational algorithm for finding |Φ_G(D = 4)⟩ becomes stuck in local minima. With this in mind, we will make some tentative observations about the results. Our simulations indicate that the density of global geometric entanglement peaks at the critical point. This is in contrast to the second-order quantum phase transition in the 1D quantum Ising model, where the peak in the geometric entanglement is displaced with respect to the critical point, while its derivative is divergent at criticality [WDM+05].

Figure 11.2: Hexagonal lattice quantum Ising model: (a) expectation value of the Hamiltonian, and (b) order parameter.

Figure 11.3: Square lattice quantum Ising model: (a) expectation value of the Hamiltonian, and (b) order parameter. For the expectation value of the Hamiltonian, the results obtained with |Ψ0(2)⟩ and |Ψ0(3)⟩ are indistinguishable on the scale of the plot.
Figure 11.4: Square lattice quantum 3-Potts model: (a) expectation value of the Hamiltonian, and (b) order parameter.

Also, notice in the insets that even though the closest product state has a higher overlap with the actual PEPS than the MF product state, the MF state still has quite a large local overlap (> 0.97) even close to the phase transition.

11.3.3 Monogamy of entanglement

Examination of figs. 11.2, 11.3 and 11.5(a,b) seems to indicate that MF approximations are more accurate for the quantum Ising model on the square lattice than on the hexagonal one. This could be interpreted in the context of the monogamy of entanglement, as a consequence of the difference in coordination number, z. For the square lattice, each node has four nearest neighbours (z = 4), whereas for the hexagonal lattice each node has three (z = 3). The decrease in the density of geometric entanglement with the coordination number z can be seen in Figs. 11.5(a,b), and also in Table 11.1, where we show the value of E at the quantum critical point of the quantum Ising model for a 1D chain (from [WDM+05]) and for the 2D hexagonal and square lattices. These results lend some support to the monogamy of entanglement as it applies to quantum lattice systems. However, a strong conclusion could only be reached by:

1. Obtaining results that are converged in D.
2. Obtaining results for lattices with higher coordination numbers.
Unfortunately, due to the inhibiting computational cost scaling of the iPEPS algorithm with D, and the way in which this rapidly worsens with increasing z, such a comprehensive study seems out of reach at this point in time.

Table 11.1: Density of global geometric entanglement at the critical point of the quantum Ising model for a 1D chain (z = 2), a 2D hexagonal lattice (z = 3), and a 2D square lattice (z = 4).

  z = 2 (1D chain):      critical E = 0.0631
  z = 3 (2D hexagonal):  critical E = 0.0285
  z = 4 (2D square):     critical E = 0.0226

Figure 11.5: (a) Geometric entanglement for the hexagonal lattice quantum Ising model. In the inset, we show the local fidelity between |Ψ0(D)⟩ and |Φ_G(D)⟩ for D = 2, 3 and 4, and between |Ψ0(2)⟩ and |Φ_H⟩. (b) Geometric entanglement for the square lattice quantum Ising model. In the inset, we show the local fidelity between |Ψ0(D)⟩ and |Φ_G(D)⟩ for D = 2 and 3, and between |Ψ0(2)⟩ and |Φ_H⟩.

11.4 Concluding Remarks

In this chapter we described an algorithm for computing the closest product state, |Φ_G⟩, to an iPEPS representation of a ground state. We then studied the properties of such a state in the context of three quantum lattice models. This allowed us to make several interesting observations. Firstly, we saw that |Φ_G⟩ has quite different observable properties to the mean-field solution, |Φ_H⟩. However, it should be noted that |Φ_G⟩ is not an alternative to mean-field theory, as it cannot be computed from a Hamiltonian description of the system alone: obtaining |Φ_G⟩ relies on having an existing PEPS representation of the ground state, and inaccuracies in this PEPS representation may well manifest in |Φ_G⟩.
Secondly, we saw that the effectiveness of product states in representing ground state properties depends on the amount of entanglement present in the ground state. Ground states with relatively little entanglement were well described by both mean-field theory and the closest product state; critical systems, on the other hand, were comparatively poorly approximated. This idea is formalised by a macroscopic measure of entanglement known as the geometric entanglement. Lastly, we presented results for lattice systems studied with the techniques outlined in this chapter, suggesting that the peak geometric entanglement for the quantum Ising model decreases with increasing coordination number. This could be interpreted as evidence supporting the notion of the monogamy of entanglement in the context of quantum lattice systems.

Chapter 12: Conclusion

12.1 Thesis Review

This thesis comprised three major sections. In the Introduction and Chapters 2 and 3, we gave some historical context to the problem of simulating many-body lattice systems and reviewed some foundational ideas. In particular, Chapter 2 described a well-known correspondence between quantum systems in D dimensions and classical systems in D + 1 dimensions. This established that, whilst the scope of this document was largely focused on developing algorithms for quantum systems in two spatial dimensions, the techniques discussed could also be applied to classical systems in three spatial dimensions. Chapter 3 presented some basic definitions of tensor networks and the ways in which they are commonly manipulated. Further to this, we introduced many-body entanglement in the context of tensor networks and motivated their use for representing ground states of certain Hamiltonians. The next section of the thesis involved the development of an algorithm for determining the ground state of infinite quantum systems in two dimensions, together with techniques for extracting relevant physical information.
Our reasoning followed a hierarchical philosophy, firstly showing in Chapter 4 that finding the ground state of a point particle is trivial for a classical computer. The idea then was that problems in higher dimensions could be approximately solved, inheriting this computational simplicity: one-dimensional quantum problems could be approximately cast as a series of zero-dimensional problems, and two-dimensional quantum problems could be approximately cast as a series of one-dimensional problems. Such an approach helps to overcome the exponential cost associated with exactly computing ground states of quantum systems. In Chapter 5, we described the basic stages of our iPEPS algorithm in terms of these notions and then demonstrated the power of the ansatz by showing efficient means of computing a range of useful physical quantities.

The final section of this work validated the algorithm by applying it to a range of real physical problems. There were three main motivations. Firstly, we wanted to show that our algorithm for approximately contracting 2D tensor networks was effective, and that the degree of the approximation can be improved by changing a refinement parameter, χ. We did this in Chapter 6 by studying the classical Ising model on the square lattice. The results obtained from our infinite-MPS and CTMRG network contraction algorithms were benchmarked against Onsager's analytical solution. In doing so, we validated a key component of the iPEPS algorithm. Chapters 7 and 8 determined the ground states of the quantum Ising model and the hard-core Bose-Hubbard model respectively. Here we saw that our results closely followed those obtained from finite-size scaling QMC studies of the models, which in this regime are seen as near-exact results.
Additionally, we computed a range of physical quantities that gave an insight into the structure of entanglement across the phase diagram, including standard entanglement measures, fidelity diagrams and spatial correlation functions. Finally, the scale of correlations in the ground state was seen to be of great importance. Specifically, in the quantum Ising model, the results generated with the full environment gave far better approximations to the ground state than those generated with the simplified update. In Chapter 9, we studied the q-state Potts model for q = 3 and q = 4. Our motivation in this chapter was to determine how effectively PEPS could simulate systems with first-order phase transitions, and in particular to see if PEPS could reproduce the increasingly first-order nature of the phase transition with increasing q. Comparing our results with those for the quantum Ising model, it was seen that the simplified update was very effective in describing the properties of the ground state. Additionally, by computing the first derivative of the ground state energy, fidelity diagrams and a comparison with the mean-field solutions, we demonstrated the characteristic behaviour of the quantum Potts model with increasing q. Chapter 10 provided a first treatment of the J1-J2 model on the infinite lattice. This frustrated model is out of reach for standard QMC methods due to the sign problem. We tried several different PEPS schemes, eventually determining that a modified version of the simplified update returned ground states with the lowest energy. Recent studies have focused mostly on the possibility of a valence bond crystal ground state in the intermediate J2 regime, where the competition between Hamiltonian terms is most significant.
Typically, authors have presumed that the ground state possesses either a columnar dimer or resonating valence bond structure, and performed either perturbation analysis or a restricted subspace diagonalization to confine their solution to such an ordering. Interestingly, our study failed to detect either of these two valence bond crystal structures in the maximally frustrated regime, whilst finding lower energy approximations to the ground state. This should be interpreted as an indication of the enormous competition between low-energy states with a different characteristic order in the J1-J2 model, and a measure of the continuing difficulty facing numerical methods in solving frustrated many-body systems. Chapter 11 presented a study of the geometric entanglement of two-dimensional systems with different lattice geometries. Here we introduced the notion of the closest product state in the context of PEPS and described a variational algorithm for computing it. Doing this for different systems presented an interesting picture of the effectiveness of mean-field theory and the role of quantum entanglement in describing strongly correlated lattice systems. Finally, we suggested that the peak geometric entanglement could be compared for lattices with differing coordination number, and that such an approach could be used to numerically justify the idea of the monogamy of entanglement for quantum lattice systems.

12.2 Final Comments

As a final comment, it is important to put PEPS and tensor networks in an appropriate place in the context of numerical physics. The work in this thesis built upon the idea that entanglement is the main source of complexity in quantum many-body systems, and that for many systems of interest this entanglement is limited by fundamental physical laws. Tensor networks extend the central idea of DMRG - that in choosing how to approximate quantum many-body states, one should account for the entanglement in a systematic way.
This paints tensor networks in a different light to other numerical methods. For example, QMC in its simplest form samples the wavefunction without consideration of the structure of entanglement in the state. The main strength of tensor networks is the richness with which they describe many-body states. The algorithm produces a representation of the state that can be manipulated to return a wealth of information. In this thesis alone we have demonstrated the computation of local observables, correlation functions and various entanglement measures, and compared states by computing characteristic fidelity measures. Furthermore, tensor network states can be tailored to better represent states with certain suggested properties. For example, recent work has focused on the tensor network description of quantum states that possess global or gauge symmetries. In this light, TNs are a powerful and highly configurable ansatz for representing quantum states. The major problem for PEPS and all present tensor network approaches to two- and higher-dimensional systems is the extremely high scaling of the computational cost in the bond dimensions. To put this in perspective, if one takes the current iPEPS algorithm and assumes that the computational throughput of modern computers continues to scale with Moore's law, then a desktop computer running a D = 10 iPEPS simulation with the full environment is some 25-30 years away. Whilst there may be some room for optimization and parallelization of the algorithms, it is likely that the success of these algorithms will depend on gaining a greater understanding of the structure of entanglement in many-body systems. PEPS is not alone in its predicament. All numerical approaches to solving the many-body problem are presently held back by intrinsic difficulties, raising fundamental physical, philosophical and biological arguments about the limits to which humans - even armed with computers - are able to describe Nature.
No one numerical approach has emerged as a consensus method for studying many-body systems, but by using them together it is possible to look at a problem from several points of view. This is the spirit in which PEPS should currently be appreciated. How successful PEPS ultimately becomes as a tool for studying many-body physics is not possible to predict, but at the very least it presently provides a fresh perspective on many problems of interest and more generally on the role of entanglement in quantum many-body systems.

Appendix A

Infinite MPS Methods for Computing the Environment

In Chapter 4, we discussed methods of approximately contracting infinite, translationally invariant 2D tensor networks. Such a procedure is an important part of the iPEPS algorithm. In particular, it is required to compute the environment, which in turn can be used to compute observables and various other properties of quantum states. The first method for contracting 2D TNs set about describing the boundary state by an infinite MPS. Then, the MPS was evolved under the action of gates formed from the tensors (see fig. 4.7). In Chapter 4, we skipped over the low-level details of this technique. In this appendix, we describe a solution to the problem of evolving an infinite MPS under the action of a repeating set of two-body gates. This corresponds, for example, to the evolution of the diagonal boundary state for the square tensor network in fig. 4.7. We then go on to describe how this and similar algorithms can be used to compute the environment in the iPEPS algorithm. The first part of this appendix details the approach developed by Orús and Vidal [OV08], which the author helped to implement and which formed the basis of the original iPEPS algorithm [JOV+08]. The approach is described in detail as it establishes for the reader some important ideas in tensor network theory.
In particular, it reproduces the theoretical framework underpinning the iTEBD algorithm for finding the ground state of 1D systems and justifies why it does not perform well for general gates. This augments the discussion of Chapter 4, shedding further light on the comparative ease of approximately computing the ground states of 1D quantum systems due to the availability of a canonical form, and the modifications needed to compute the environment of 2D quantum systems.

Figure A.1: The operation of the two-body gates a and b on the boundary MPS |ϕ[0]⟩. Here the accumulated action of the 'a' ('b') gates is represented by the transfer matrix TA (TB).

A.1 Problem Overview

The general problem is outlined in fig. A.1. We assume that our initial boundary state, |ϕ[0]⟩, is translationally invariant under shifts of two sites. It is described by an infinite MPS composed of four repeating tensors, Γ_C[0], λ_C[0], Γ_D[0] and λ_D[0]. The λ matrices in this representation are diagonal operators. The boundary state is operated on by alternating sets of gate tensors, where each gate is labeled a or b. For now, we assume that these gates are completely arbitrary in their coefficients and make no connection between the gate and the PEPS tensors. A single step of the boundary evolution involves the application of a row of a gates, followed by a row of b gates, obtaining a new boundary MPS of the same structure as in fig. A.1. We call these rows TA and TB respectively. Applying each of these gates k times, we obtain the evolved boundary state |ϕ[k]⟩ and its constituent tensors Γ_C[k], λ_C[k], Γ_D[k] and λ_D[k]. Applying the gates an infinite number of times, we obtain the dominant eigenvector |Φ⟩ as described below:

|Φ⟩ = |ϕ[∞]⟩ = lim_{k→∞} (TB TA)^k |ϕ[0]⟩,    (A.1)

where ⟨Φ|ϕ[0]⟩ ≠ 0. Thus, if we obtain a general procedure for finding |ϕ[1]⟩ = TB TA |ϕ[0]⟩, we can iterate this procedure until we see convergence in the boundary state. In fact, since the structure of TA and TB is identical apart from a translation of one lattice site, we need only develop a method for computing the half-step evolution |ϕ′⟩ = TA |ϕ[0]⟩.

Unfortunately, an exact general solution to this eigenvalue problem is computationally intractable, as the number of coefficients required to represent the boundary MPS can increase exponentially step upon step. We instead define an efficient method for approximating the boundary MPS as one that meets the following criteria:

1. The number of coefficients required to describe the infinite boundary MPS is bounded by some fixed upper limit at any stage.

2. If the exact evolved boundary MPS is represented with more coefficients than this limit, there exists a method for computing a justifiably good approximation to it.

In the first half of this appendix, we will establish an approach for finding such an approximation by deriving some properties of the boundary MPS and its evolution under general, non-unitary gates.

Lemma I: The bond dimension of the boundary MPS may increase exponentially in the number of steps unless truncated.

Proof of this establishes that exactly evolving the boundary state is computationally inefficient. Consider the operation of the gate a on the tensors Γ_C[0], λ_C[0] and Γ_D[0], as in fig. A.2. The bond indices of these tensors are bounded by the parameter χ. After contracting the tensors and the gate into a (maximally) rank-(χD²) tensor M, and splitting M by SVD, some of the bonds are now of dimension χD². Iterating this procedure with gates operating on alternating links, one can see that the maximal bond dimension scales as χD^(2N), where N is the number of steps performed.
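The bond growth of Lemma I can be sketched numerically. Below is an illustrative NumPy snippet (not code from the thesis); the gate-leg dimension is written d here, so a single gate application can grow the central bond from χ to χd:

```python
import numpy as np

chi, d = 3, 4          # MPS bond dimension and gate leg dimension (illustrative values)
rng = np.random.default_rng(0)

# Two neighbouring MPS tensors (left bond, physical, right bond) and a two-body gate.
GC = rng.normal(size=(chi, d, chi))
GD = rng.normal(size=(chi, d, chi))
g  = rng.normal(size=(d, d, d, d))   # (out1, out2, in1, in2)

# Contract the pair and the gate into a single block M (cf. fig. A.2).
theta = np.einsum('aib,bjc->aijc', GC, GD)       # (chi, d, d, chi)
M     = np.einsum('klij,aijc->aklc', g, theta)   # apply the gate

# Split M by SVD across the middle bond: the new bond has up to chi*d values.
U, S, Vh = np.linalg.svd(M.reshape(chi * d, d * chi), full_matrices=False)
print(len(S))   # chi * d = 12
```

Iterating this split-and-regrow step N times without truncation gives the exponential χd^N (here d plays the role of D² in the text) growth the lemma describes.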
Lemma II: If the MPS is in the canonical form, the optimal update of the state after the application of a single gate, g, is given by a local decomposition.

The canonical form of the MPS prescribes that the Schmidt form may be reproduced about any λ operator. That is, our MPS |ϕ[k]⟩ is of the form:

|ϕ[k]⟩ = Σ_α λ_αα |ψ_α^left⟩ |ψ_α^right⟩,    (A.2)

where the bases {|ψ_α^left⟩} and {|ψ_α^right⟩} are orthonormal by construction. As an example, consider the MPS in fig. A.3i. If this MPS is in the canonical form, then we obtain the orthonormal bases for the left and right sections by contracting all the tensors to the left and right of λ_C[0].

Figure A.2: The exact evolution of the boundary state with an MPS may lead to an increase in the MPS bond dimension. Here, a gate is contracted with MPS tensors having physical indices of dimension D and bond indices of dimension χ. Splitting the resulting tensor leads to an interjoining link of dimension up to χD².

A special property of an MPS in the canonical form is that the contraction of the left or right section with its conjugate along the physical indices returns the identity, a direct result of the orthonormality of the states |ψ_α^left⟩. We say that the scalar product matrix M_αα′ = ⟨ψ_α′^left|ψ_α^left⟩ is the identity. In a general sense, this matrix corresponds to the left eigenvector of the structure surrounded by the purple box in fig. A.3ii. We can efficiently compute it by starting with a random vector and operating on it with this structure until convergence. An important property of an orthonormal basis is that we can transform it under the action of some unitary operator and the orthonormality is preserved. In fig. A.3iii we show that the states Σ_α U_γα |ψ_α^left⟩ also give rise to an identity scalar product. Consider that we have a state |ϕ⟩ that we know to be in the canonical form, and that has bond dimensions bounded by χ.
We also assume without loss of generality that |ϕ⟩ is normalised. We want to prove that the canonical form is helpful in computing the state |ϕ′⟩ (see fig. A.4ii), which is the best approximation to the state g[C,D]|ϕ⟩ (see fig. A.4i) that can be achieved without any bond dimension of |ϕ′⟩ exceeding χ. Formally, we want to maximise the fidelity:

f(ϕ′) = ⟨ϕ′| g[C,D] |ϕ⟩ / ⟨ϕ′|ϕ′⟩.    (A.3)

The numerator in this expression is shown as a tensor network in fig. A.4iii.

Figure A.3: Overview of Lemma 2, part 1. i) The MPS can be expressed in a bipartite form by contracting together all of the tensors to the left and right of some λ tensor. ii) If the MPS is in the canonical form, then the bases of each partition are orthonormal and the corresponding scalar product matrix is the identity. iii) The orthonormality of each basis is not disturbed by the application of some unitary operator on the open bond.

The canonical form allows us to make a drastic simplification to the fidelity tensor network and hence to its optimisation. Since the sections to the left and right of the sites are described by the scalar product matrix, we may replace them with the identity matrix, as shown in fig. A.5i. Here we have also included the denominator of eqn. A.3. With this simplification, one can show that the optimal update is given by a simple local decomposition and truncation. Firstly, we take the tensors and the gate surrounded by the red box in fig. A.5i. Then, we contract these together, forming a tensor T, and split them by singular value decomposition (see fig. A.5ii). Next, we truncate the inner bonds with the projectors P_χ, retaining only the subspace pertaining to the χ largest singular values of T. It is possible to show, by invoking the properties of the SVD, that the new truncated tensors Γ′_C, λ′_C and Γ′_D, as shown in fig. A.5iii, maximise the fidelity in eqn. A.3.
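The local decomposition and truncation of Lemma II is a truncated SVD, whose optimality (in the Frobenius norm, and hence for the fidelity with orthonormal environments) is the Eckart-Young theorem. A minimal illustrative sketch, with a random two-site block standing in for the gate-applied tensors:

```python
import numpy as np

chi, d = 4, 2
rng = np.random.default_rng(1)

# Two-site block with the gate already applied; the sections outside are
# orthonormal, so they act as identities on the fidelity (cf. fig. A.5).
T = rng.normal(size=(chi, d, d, chi))
T /= np.linalg.norm(T)

U, S, Vh = np.linalg.svd(T.reshape(chi * d, d * chi), full_matrices=False)

# Truncate to the chi largest singular values (the projectors P_chi).
Uc, Sc, Vc = U[:, :chi], S[:chi], Vh[:chi, :]
T_trunc = (Uc * Sc) @ Vc

# By the Eckart-Young theorem this is the best rank-chi approximation, and
# the squared error is exactly the discarded singular value weight.
err = np.linalg.norm(T.reshape(chi * d, d * chi) - T_trunc) ** 2
print(np.isclose(err, np.sum(S[chi:] ** 2)))   # True
```

The factors Uc, Sc and Vc are then reshaped back into the new tensors Γ′_C, λ′_C and Γ′_D.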
Lemma III: If the set of gates operating on an infinite MPS are near-unitary, then the local update is a good approximation to the optimal update.

Figure A.4: Overview of Lemma 2, part 2. i) The state g[C,D]|ϕ⟩ formed by the application of a single two-body gate to the state |ϕ⟩. ii) The proposed new state |ϕ′⟩. iii) The unnormalised fidelity ⟨ϕ′| g[C,D] |ϕ⟩ as a tensor network.

We want to describe the update of an infinite MPS that is in the canonical form under the action of a repeated set of gates. An example of this is the transfer matrix TA acting on the state |ϕ[0]⟩ in fig. A.1. However, for now we make the restriction that the gates are near-unitary with respect to their input and output legs. The task is to find a good approximation to the evolved state |ϕ′⟩ = TA|ϕ⟩ (see fig. A.6i), by maximising the fidelity ⟨ϕ′|TA|ϕ⟩ / ⟨ϕ′|ϕ′⟩. Since our |ϕ⟩ is translationally invariant and the gates are identical, we assume that |ϕ′⟩ is also translationally invariant, and is described by the tensors Γ′_C, λ′_C, Γ′_D and λ′_D. To do this, we modify the problem in a subtle way. We focus on the update of one of the repeated sections of |ϕ⟩, and leave the rest of the MPS acted on by the remaining gates. This state, which we call |ϕ′′⟩, is shown in fig. A.7i. We want to find the tensors Γ′′_C, λ′′_C, Γ′′_D and λ′′_D that maximise the fidelity ⟨ϕ′′|g|ϕ⟩ / ⟨ϕ′′|ϕ′′⟩. This quantity is shown in fig. A.7ii. Recall that:

Figure A.5: Overview of Lemma 2, part 3. i) The expression of the normalised fidelity of eqn. A.3 as a tensor network. ii) The local decomposition operation. iii) The determination of the new MPS tensors. Here, the operators P_χ truncate the bond, retaining the subspace corresponding to the χ largest values of S_T.

Figure A.6: Overview of Lemma 3, part 1. i) The application of a row of two-body gates to the state |ϕ⟩.
ii) The MPS |ϕ′⟩ approximating the updated state TA|ϕ⟩.

1. The gates g are near-unitary.

2. The MPS representing |ϕ⟩ is in the canonical form.

The first property means that the contraction of the gate with its conjugate transpose in fig. A.7ii may be approximated by the identity (see fig. A.7iii). As the state is in the canonical form, this means the scalar product matrices to the left and right of the orange box in fig. A.7ii are approximately the identity matrix. We then invoke the reasoning of Lemma 2 to see that Γ′′_C, λ′′_C and Γ′′_D may be determined by the same local decomposition. Next, we update the entire MPS by making the substitutions Γ_C → Γ′′_C, λ_C → λ′′_C and Γ_D → Γ′′_D. The conclusion is that for states in the canonical form, we may approximate the action of a set of repeated near-unitary gates by carrying out a local decomposition. This is the backbone of the TEBD algorithm [Vid04, Vid07]. A remarkable aspect of the TEBD algorithm is that if the MPS starts in the canonical form, then the state formed by the iterated action of the transfer matrices TA and TB (see fig. A.1) containing near-unitary gates seems to stay acceptably close to the canonical form for the algorithm to be effective. This is related to the property g†g ≈ I. Designing an initial MPS in the canonical form that has some overlap with the eventual dominant eigenvector is usually quite easy (take, for instance, the product state (|0⟩ + |1⟩)^⊗N, where the computational basis {|0⟩, |1⟩} represents some local physical degree of freedom). If the gates were not near-unitary, the effectiveness of the local update could not be guaranteed. It is possible that the finite-χ MPS representation of the evolved state would only be acceptably close to the actual evolved state for very large χ.

Figure A.7: Overview of Lemma 3, part 2. i) The MPS form of the proposed updated state, |ϕ′′⟩. ii) The unnormalised overlap between |ϕ′′⟩ and TA|ϕ⟩. iii) If the gate is near-unitary, then the contraction of the gate with its conjugate transpose is approximately the identity operator.

Furthermore, since for a general gate g†g ≠ I, the evolution will cause the MPS to drift away from the canonical form faster than in TEBD. This means that the local update is not only suboptimal for the first iteration of the algorithm, but its performance becomes increasingly unpredictable as the evolution progresses. It is clear that for the contraction of general infinite, translationally invariant 2D tensor networks, where the gate is not near-unitary, we need to think more deeply about how to update the infinite MPS boundary state.

Lemma IV: The physical state represented by a tensor network state is invariant under the insertion of full-rank operator-inverse pairs on bonds.

We show a justification for this simple notion in fig. A.8. The diagram shows the infinite MPS representation of some state |ϕ⟩. The coefficients of the state expanded in a local physical basis are returned by the contraction of the bonds of the infinite MPS. Since the insertion of some operator pair T-T⁻¹ on the bond does not change the object returned by this contraction, the state is invariant under this insertion.

Figure A.8: Overview of Lemma 4. The state |ϕ⟩ is invariant under an operation on a tensor bond that resolves the identity.

Lemma V: Infinite, translationally invariant MPS states can be transformed into a canonical form if the scalar product matrix can be computed.

Lemma II stated that the scalar product matrices of partitions of the MPS can tell us whether the MPS is in the canonical form. It also stated that the orthonormality of the auxiliary bases is invariant under unitary operations on the cut MPS bond.
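Before proceeding, the bond-insertion invariance of Lemma IV is easy to check numerically. A minimal sketch, with random tensors standing in for the MPS tensors:

```python
import numpy as np

chi, d = 3, 2
rng = np.random.default_rng(2)

A = rng.normal(size=(chi, d, chi))
B = rng.normal(size=(chi, d, chi))

# Coefficients of a two-site block (open outer bonds).
block = np.einsum('aib,bjc->aijc', A, B)

# Insert a full-rank pair T, T^{-1} on the shared bond (Lemma IV).
T = rng.normal(size=(chi, chi)) + chi * np.eye(chi)  # shifted to be well-conditioned
Tinv = np.linalg.inv(T)
A2 = np.einsum('aib,bc->aic', A, T)
B2 = np.einsum('ab,bjc->ajc', Tinv, B)
block2 = np.einsum('aib,bjc->aijc', A2, B2)

print(np.allclose(block, block2))   # True: the represented state is unchanged
```

This gauge freedom is exactly what Lemma V exploits to restore the canonical form.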
Lemma III stated that if the MPS is in the canonical form, then the update of the MPS by a near-unitary set of gates can be well approximated by some local decomposition of the repeated sections. Lemma III also stated that the canonical form is approximately maintained under near-unitary evolution, but would likely drift from the canonical form under non-unitary evolution, making it difficult to iterate the evolution. However, we know from Lemma IV that the state is invariant under certain transformations along the MPS bonds. The question we want to answer here is: for an arbitrary translationally invariant infinite MPS, can we find a set of transformations that restore the MPS to the canonical form?

Figure A.9i shows an MPS representing some state |ϕ′′⟩. The MPS is composed of tensors Γ′′_C, λ′′_C, Γ′′_D and λ′′_D. The MPS is not in the canonical form and, by virtue of this, the scalar product matrices of the left and right partitions {|ψ_α^left⟩} and {|ψ_β^right⟩} (enclosed by red boxes in fig. A.9i) are not the identity matrix, but the positive semidefinite matrices L and R, shown in fig. A.9ii. Consider the following decomposition of L and R,

L = U_L D_L U_L†,   R = U_R D_R U_R†,    (A.4)

and the following assignments,

T_L = D_L^(-1/2) U_L†,   T_R = D_R^(-1/2) U_R†.    (A.5)

Then it is clear that the modified states |ψ̂_α^left⟩ = T_L|ψ_α^left⟩ and |ψ̂_β^right⟩ = T_R|ψ_β^right⟩ give rise to identity scalar product matrices.

Using Lemma IV, we introduce T_L, T_R and their inverses into our MPS (see fig. A.10i), leaving the overall state invariant. Then, we take the three actions in fig. A.10ii:

1. Contracting T_L⁻¹, λ′′_C and T_R⁻¹ into a matrix Q, and splitting Q via SVD into U_Q, S_Q and V_Q.

2. Contracting Γ′′_C, T_L and U_Q to form Γ̂_C.

3. Contracting V_Q, T_R and Γ′′_D to form Γ̂_D.

Introducing Γ̂_C, Γ̂_D and λ̂_C = S_Q/√(tr S_Q²) uniformly through the MPS (see fig. A.10iii), we have a state |ϕ̂⟩ = |ϕ′′⟩ (since all the introduced operations resolve the identity) that is in the canonical form when considered at the links between Γ̂_C and Γ̂_D. To see this, consider that the left and right partitions in the red boxes in fig. A.10iii are U_Q|ψ̂_α^left⟩ and V_Q|ψ̂_β^right⟩ respectively. Since applying unitary operators to an orthonormal basis preserves this orthonormality (see Lemma II), our states to the left and right of λ̂_C are spanned by an orthonormal basis in this representation. To complete the transformation of the state into the canonical form at any partition, we need to apply the same procedure to the links between Γ̂_D and Γ̂_C.

A.2 Evolution of an Infinite MPS by Non-unitary Operators

In this section, we use the principles set down in Lemmas I-V to describe the evolution of an infinite MPS by repeating, non-unitary gates g.

Figure A.9: Overview of Lemma 5, part 1. i) The state |ϕ′′⟩ is here expressed as an MPS. ii) Since the MPS is not in the canonical form, the scalar product matrices are not the identity, but the positive definite operators L and R.

It is clear by now that there are a couple of essential points to understand.

1. The evolution of an MPS with respect to a single, local gate can be well approximated by a local decomposition and truncation (Lemma III), as long as the partitions to the left and the right of the gate are described by an orthonormal basis.

2. Partitions of the MPS can be orthonormalised, leaving the overall state invariant, as long as we can compute the scalar product matrix of the partition efficiently (Lemma V).

In fig. A.11 we detail an approach building on these principles. Fig. A.11i depicts the state |ϕ′′⟩ = TA|ϕ⟩ formed by applying the gates g to the state |ϕ⟩. Then, we compute the scalar products of the left and right partitions of the state. In fig. A.11ii we show the tensor contraction used to compute L, the scalar product of the left partition.
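The orthonormalisation step of Lemma V (eqns. A.4 and A.5) can be sketched as follows; here L and R are random positive definite matrices standing in for the scalar product matrices, and real symmetric matrices are used so that the transpose plays the role of the conjugate transpose:

```python
import numpy as np

chi = 4
rng = np.random.default_rng(3)

# Positive definite scalar product matrices of the left/right partitions (fig. A.9ii).
X = rng.normal(size=(chi, chi)); L = X @ X.T + np.eye(chi)
Y = rng.normal(size=(chi, chi)); R = Y @ Y.T + np.eye(chi)

# Eigendecompositions L = U_L D_L U_L^dag, R = U_R D_R U_R^dag (eqn. A.4).
DL, UL = np.linalg.eigh(L)
DR, UR = np.linalg.eigh(R)

# T_L = D_L^{-1/2} U_L^dag and T_R = D_R^{-1/2} U_R^dag (eqn. A.5).
TL = np.diag(DL ** -0.5) @ UL.T
TR = np.diag(DR ** -0.5) @ UR.T

# The transformed bases now have identity scalar product matrices.
print(np.allclose(TL @ L @ TL.T, np.eye(chi)))   # True
print(np.allclose(TR @ R @ TR.T, np.eye(chi)))   # True
```

The inverses T_L⁻¹ and T_R⁻¹ are then absorbed into λ′′_C to form the matrix Q of action 1.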
Having done this, we orthonormalise the state (fig. A.11iii), such that the sections to the left and right of the orange box in fig. A.11iv are spanned by an orthonormal basis. Then, we can find the new tensors Γ′′_C, λ′′_C and Γ′′_D by the local decomposition and truncation outlined in Lemma II.

Figure A.10: Overview of Lemma 5, part 2. i) The partitions |ψ̂_α^left⟩ = T_L|ψ_α^left⟩ and |ψ̂_β^right⟩ = T_R|ψ_β^right⟩. The inverse matrices T_L⁻¹ and T_R⁻¹ have been inserted to resolve the identity. ii) The contraction of T_L⁻¹, λ′′_C and T_R⁻¹ into the matrix Q, which is then split by SVD. The determination of Γ̂_C and Γ̂_D. iii) The new state |ϕ̂⟩ is in the canonical form about the operator λ̂_C.

Finally, we make the replacements Γ̂_C → Γ′′_C, λ̂_C → λ′′_C and Γ̂_D → Γ′′_D. Our updated state contains the tensors Γ′′_C, λ′′_C, Γ′′_D and λ̂_D. The two dominant computational costs of this algorithm are:

1. The computational cost of determining the scalar product matrices L and R. This is done by evolving the random vectors φ_L and φ_R as in fig. A.12.

2. The computational cost of performing the local decomposition (SVD).

Figure A.11: The evolution of an MPS under non-unitary gates. i) The state TA|ϕ⟩. ii) The computation of the left scalar product matrix, L. The computation of the right scalar product matrix follows by analogy. iii) The transformations Γ_C → Γ̂_C, Γ_D → Γ̂_D and λ_D → λ̂_D. iv) The state is unchanged, but is orthonormalised such that the sections to the left and the right of the orange box can be approximated by the identity matrix. The local decomposition and truncation can then find the new state.

Figure A.12: The left and right scalar product matrices are commonly found by beginning with some random left and right vectors, φ_L and φ_R, and converging them under the action of repeated transfer matrices.
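The procedure of fig. A.12 is a power iteration for the dominant eigenvector of the MPS transfer matrix. A minimal sketch, assuming for simplicity a single repeated tensor A (the algorithm in the text iterates the full two-site transfer matrix, and the iteration count here is chosen generously):

```python
import numpy as np

chi, d = 3, 2
rng = np.random.default_rng(4)
A = rng.normal(size=(chi, d, chi)) / np.sqrt(chi * d)

def apply_left(phi, A):
    # phi -> sum_i (A^i)^T phi A^i : one application of the transfer matrix from the left.
    return np.einsum('aib,ac,cid->bd', A, phi, A)

phi = np.eye(chi)                  # starting "vector" phi_L
for _ in range(2000):              # iterate until converged
    phi = apply_left(phi, A)
    phi /= np.linalg.norm(phi)

# phi is now (proportional to) the dominant left eigenvector L of the transfer matrix:
# one more application only rescales it.
check = apply_left(phi, A)
print(np.linalg.norm(check / np.linalg.norm(check) - phi))   # small residual
```

The convergence rate is set by the ratio of the two largest transfer matrix eigenvalues, which is why the number of iterations S_LR appears explicitly in the cost estimates of Table A.1.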
A.3 MPS-based Contraction Schemes for PEPS

In this section we describe how the scheme outlined in the previous section can be used to compute the environment of the iPEPS for various lattice geometries, and compare the computational cost of these implementations. In this discussion, we assume that our states have been computed by the iPEPS algorithm of Chapter 5. This introduces the concept of a minimal covering PEPS representation of the lattice. This is the minimum number of distinct PEPS tensors that are needed to perform an iPEPS evolution, and is a function of both the Hamiltonian and the lattice geometry. For instance, consider a Hamiltonian with identical two-body terms describing a system on the square lattice. The physical ground state might be invariant under any lattice translation (e.g. a spin liquid), but since the iPEPS algorithm update breaks translational invariance (see Appendix C for more details), it turns out that the minimal covering for the PEPS is invariant under shifts by two lattice sites, giving the A/B pattern seen throughout this thesis. Since we don't know if, and by how much, the ground state breaks the translational invariance of the Hamiltonian, there is no guarantee that this representation represents the ground state well. Nevertheless, it is the minimal set of tensors with which we can represent a ground state for this Hamiltonian and lattice geometry in the iPEPS algorithm. For this chapter, we assume that our Hamiltonian contains identical terms, and proceed using the minimal covering for each of the geometries we consider. However, it is possible to scale the approach we describe to compute the environment for PEPS networks with any type of regular periodicity.

Figure A.13: The gates 'a' and 'b', formed by contracting the PEPS tensors with their conjugate versions along the physical index.
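The substitution A-A* → a of fig. A.13 is a single contraction followed by a fusing of bond pairs; the index ordering below (physical index first, then the four bonds) is an assumption for illustration:

```python
import numpy as np

d, D = 2, 3
rng = np.random.default_rng(5)
A = rng.normal(size=(d, D, D, D, D))   # physical index, then up/right/down/left bonds

# Contract A with its conjugate along the physical index and fuse the
# paired bonds, giving the gate 'a' with four legs of dimension D^2.
a = np.einsum('purdl,pURDL->uUrRdDlL', A, A.conj())
a = a.reshape(D * D, D * D, D * D, D * D)

print(a.shape)   # (9, 9, 9, 9)
```

As noted below, it is often cheaper in practice to keep the A and A* layers uncontracted rather than forming a explicitly.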
A.3.1 The Square Lattice

A Diagonal Scheme

Consider the task of computing the environment, as depicted in fig. 5.4. Around our region of interest we have PEPS tensors contracted with their conjugate pairs along the physical index. Making the substitutions A-A* → a and B-B* → b as in fig. A.13, this structure becomes a 2D tensor network. We can then use an infinite MPS to describe the boundary state and express a and b as gates operating on this state (see fig. A.14i). Starting with random initial states (open boundary conditions), we iterate the procedure outlined in the previous sections to obtain the evolved states:

⟨ϕ̃_L| = lim_{k→∞} ⟨ϕ_L| (TA TB)^k,   |ϕ̃_R⟩ = lim_{k→∞} (TA TB)^k |ϕ_R⟩.    (A.6)

This leaves us with the structure shown in fig. A.14ii. Next, we evolve the finite states |v_R⟩ and ⟨v_L| in directions perpendicular to the infinite MPS states. This is a 1D problem that is easily solved on a classical computer. Having done this, we are left with the structure shown in fig. A.14iii. Before advancing, it is worth stating that although we depict the gates a and b in fig. A.14, it is usually worth keeping these structures in the uncontracted A-A* and B-B* forms.

Figure A.14: The diagonal contraction scheme for the square lattice.

That is, it is beneficial on the grounds of computational cost to leave the d-dimensional physical index uncontracted. We favoured the gates a and b for reasons of schematic clarity and to fully make the connection with the discussion of section A.1.

A Parallel Contraction Scheme

An alternative scheme for contracting the environment is to use for the boundary state an MPS that traverses the lattice in a direction parallel to the PEPS bonds. This process is summarised in fig. A.15. Here, we have chosen to compute the environment around a horizontal link between B and A, by starting with infinite horizontal MPSes (see fig. A.15i).
The evolution of this MPS is a different procedure to that for the diagonal scheme, where the gate operations almost directly mapped to those in the discussion of section A.1. However, the algorithm for the parallel scheme follows the same general principles. Firstly, we attach a row of gates to the state. Secondly, we find the left and right scalar product matrices (see fig. A.16) about a gate operation. Then, we transform the state into the canonical form, and perform the gate operation and truncation. Once we have determined ⟨ϕ̃_L| and |ϕ̃_R⟩, we converge the finite-dimensional left and right eigenvectors, ⟨v_L| and |v_R⟩, to complete the description of the environment.

Figure A.15: The parallel contraction scheme for the square lattice.

A.3.2 Beyond the Square Lattice - the Hexagonal, Kagome and Triangular Lattices

In this section, we briefly introduce schemes for contracting 2D networks of other geometries. Like the square lattice, the hexagonal lattice can be covered with just two tensors, A and B. In fact, the similarity extends further. Problems on the hexagonal lattice can be solved by applying the square iPEPS algorithm with a simple modification. Starting with the four-legged tensors A and B and removing every instance of one of the four unique links (up, down, left or right), we are left with an infinite hexagonal lattice. However, we are curious about whether a dedicated algorithm for the hexagonal lattice captures correlations more efficiently, and how its computational complexity compares with the solution for the square lattice. One possible contraction scheme for the hexagonal lattice is shown in fig. A.17.

Figure A.16: The computation of the scalar product matrices in the parallel scheme.
The stages of this algorithm are by now familiar: the evolution of some infinite MPSes, ⟨ϕL| → ⟨ϕ̃L|, |ϕR⟩ → |ϕ̃R⟩, under the action of infinite rows of gates, followed by the convergence of the left and right vectors ⟨vL| → ⟨ṽL| and |vR⟩ → |ṽR⟩. The key operations, such as computing the scalar product matrices, follow by analogy with the square schemes.

In fig. A.18 we depict a possible contraction scheme for the Kagome lattice. The minimal covering consists of three distinct tensors, A, B and C. In the dotted red box, we show the gate operations on the infinite MPS. Comparison with the square schemes in figs. A.14 and A.15 shows that the update to the infinite MPS is of one of two types: the first stage maps exactly to the square parallel scheme with gates a and b, and the second stage maps directly to the square diagonal scheme with a single gate c. Following this, the vectors ⟨ṽL| and |ṽR⟩ are converged by exactly the same algorithm used in the square parallel scheme.

In fig. A.19, we show a scheme for contracting a PEPS for the triangular lattice. Each lattice site has six nearest neighbours, and this increased connectivity leads to a more complex algorithm. For example, even before considering the computational cost of basic stages, it can be seen that the infinite boundary MPS contains 12 repeating tensors.

Figure A.17: A contraction scheme for the hexagonal lattice

A.3.3 Computational Cost Comparison

To conclude, we compare the leading-order computational cost of each of these contraction schemes (see Table A.1). The schemes we have implemented do not necessarily represent an exhaustive or optimal set, but the computational costs are interesting for a couple of reasons.
Firstly, as determining the environment is the dominant computational task in most iPEPS simulations, it puts in perspective the computational resources required to perform iPEPS simulations and justifies the limits to which we could provide results (e.g. the exclusion of triangular lattice Ising model results in Chapter 11). Secondly, it gives a rough idea of the relationship between the connectivity of a PEPS and the computational complexity of its associated iPEPS implementation.

Figure A.18: A contraction scheme for the Kagome lattice

Figure A.19: A contraction scheme for the triangular lattice

The computational costs are based on the following assumptions.

1. Each link of the PEPS has the same bond dimension D, and each infinite MPS has a bond dimension of χ.
2. The local dimension, d, is the same at every lattice site.
3. The convergence of the infinite MPS involves Sϕ steps. The determination of the scalar product matrices (see figs. A.12 and A.16) within each step involves SLR iterations.
4. The convergence of the vectors |ṽR⟩ and ⟨ṽL| involves Sv iterations.

Scheme            | Computation of L & R (Sϕ iterations) | Orthonormalization & local decomposition | Computation of |ṽR⟩ & ⟨ṽL| (Sv iterations)
------------------|--------------------------------------|------------------------------------------|-------------------------------------------
Square Diagonal   | χ³D⁴ + χ²D⁶d + SLR(χ³D⁴)             | χ³D⁶                                     | χ³D⁴ + χ²D⁶d
Square Horizontal | χ³D⁶ + SLR(χ²D⁸d)                    | χ³D⁶                                     | χ³D⁴ + χ²D⁶d
Hexagonal         | χ²D⁶d² + SLR(χ³D⁴)                   | χ³D⁶                                     | χ³D⁴ + χ²D⁵d
Kagome            | χ³D⁶ + SLR(χ²D⁸d)                    | χ³D⁶                                     | χ³D⁴ + χ²D⁶d
Triangular        | SLR(χ³D⁸ + χ²D¹¹d)                   | χ³D¹²                                    | χ³D⁶ + χ²D⁹d

Table A.1: The leading-order computational costs for the square, hexagonal, Kagome and triangular lattice infinite-MPS based contraction schemes.

The leading-order costs are shown in table A.1. Two comments are in order.
Firstly, comparing the square diagonal and horizontal computational costs, one can see that it is possible to have different computational cost scalings for the same lattice geometry. In practice, the difference in the cost of computing L and R may allow access to higher D and χ in the diagonal scheme than in the horizontal scheme. A separate issue is whether one scheme or the other is naturally better at representing the correlations in the square lattice. We have not investigated this extensively, and any relation could be heavily model dependent, but what empirical evidence we have seen seems to indicate that the horizontal scheme performs very slightly better for the same D and χ.

The second point to make is that there does not appear to be a simple relation between the coordination number of the lattice, z, and the computational complexity of contracting the PEPS representing the lattice. For instance, from our results there is very little difference between the cost of contracting a hexagonal PEPS (z = 3) and a square PEPS (z = 4) with the diagonal scheme. However, there is a vast increase in computational complexity when contracting the triangular PEPS (z = 6). Together, these suggest that some of the ongoing challenges in this area are to find the optimal contraction scheme for each lattice geometry, and then to investigate whether additional approximations can be made to further reduce the computational cost.

Appendix B

Corner Transfer Matrix Renormalization Group Algorithms for the Square Lattice

B.1 Problem Overview

As explained in Chapter 4, the corner transfer matrix (CTM) approach consists of so-called corner transfer matrices and edge tensors surrounding a unit cell. The idea of the corner transfer matrix renormalization group (CTMRG) algorithm is that by operating on these tensors in certain ways, we can find instances of them that collectively approximate the environment.
In this appendix, we describe several variations of an algorithm for finding such tensors. We restrict ourselves to a 2 × 2 unit cell containing tensors TA, TB, TC and TD and their conjugate versions. Our CTM is described by the corner tensors Cm for m = 1...4 and the edge tensors En for n = 1...8. This structure is shown in fig. B.1. Each of the CTM tensors is a reduced-rank approximation to a particular section of the environment. The bonds between CTM tensors, each of dimension χ, allow correlations to flow between these sections. A higher χ captures more correlations and hence gives a better approximation to the actual environment. The numerical task solved by Nishino [NO96] was to determine an efficient (polynomial time) means of evolving to such an environment, keeping the rank bounded at each step. In this appendix, we discuss two of our own implementations, describing our reasoning behind them and the computational cost of each.

Figure B.1: The basic CTM structure for a PEPS with 2 × 2 periodicity

B.2 The Corner Transfer Matrix Renormalization Group for iPEPS

In the CTMRG algorithms we will describe, each step consists of i) insertion, ii) absorption and iii) renormalization. The insertion process involves replicating part of the basic CTM unit cell alongside the existing unit cell, creating an excess of tensors in the unit cell. The tensors are inserted in such a way that they do not disrupt the regular pattern of the lattice. For example, we may make any of the insertions shown in fig. B.2. This can be interpreted as i) placing a block to the left, ii) placing a block to the right or iii) parting the CTM down the middle and inserting a 2 × 2 block. The interpretation is not particularly important, but may preempt the way in which we absorb the CTM tensors.
The absorption process contracts some of the excess unit-cell tensors along their physical dimension and then into the environment tensors (see figure B.3). Here, we make one step to the left and one step to the right, creating the tensors C̃1 → C̃4 and Ẽ1, Ẽ2, Ẽ5 and Ẽ6. Each absorption step increases some bond dimensions between the environment tensors, in this case from χ to χD². To keep the CTM representation compact, we must find renormalization tensors Q and W that reduce the dimension of the bonds between the environment tensors, whilst keeping as much information about the environment as possible (see figure B.4). On computational cost grounds we prefer to absorb and renormalise alternately, rather than make multiple absorptions followed by a more severe renormalization.

Figure B.2: Insertion of a 2 × 2 block of sites i) to the left of the existing unit-cell ii) to the right of the existing unit-cell iii) in the middle of the existing unit-cell

Figure B.3: The absorption process in a horizontal direction, increasing the dimension of the bonds.

For each of the interpretations of the insertions in figure B.2, we can choose a different absorption scheme. For i), we may make two absorption and renormalization steps to the right, so that our unit cell is again A-B-C-D. Analogously, for ii) we can make two steps to the left. For iii), we take the left-hand column and move it to the left, and the right-hand column and move it to the right. Our unit cell is now B-A-D-C.

A complete cycle of absorptions and renormalizations has been completed when we have made an equal number of steps in each direction and we have returned to the A-B-C-D unit-cell structure. For i) this means making, for example, two steps right, two steps down, two steps left and finally two steps up. For ii) it is similarly simple. It is easily
seen that for iii) we can achieve a full cycle by alternating horizontal and vertical steps. For an infinite system, this process of insertion, absorption and renormalization continues until the environment converges, as may be determined, for example, by examining the eigenvalues of the corner matrices. The idea is that the Q and W tensors effectively keep the most important correlations in the environment and that they resolve the identity in some reduced subspace. The absorption and renormalization steps in CTMRG are similar to the coarse-graining procedure of DMRG, and so this process is also called coarse-graining.

B.3 Coarse-Graining Approaches

The most physically motivated choice of Q and W would be that for which the renormalized density matrix ρR (shown in figure B.4) is nearest, in some sense, to the density matrix after absorption, ρA (shown on the RHS of figure B.3). That is, we directly apply the principles of DMRG to the CTM. A variational solution to this problem seems computationally expensive for a number of reasons: firstly, it would add another layer of iteration, and secondly it scales exponentially in the unit-cell size.

Figure B.4: Renormalization of the vertical bonds by the renormalization operators Q and W.

The first example of an efficient CTMRG algorithm for computing the environment of PEPS/TPS states was developed by Nishino and Okunishi [NO96]. Here, the authors describe a technique where at each point a unit-cell tensor is contracted diagonally into the corner transfer matrix. The renormalization tensors are determined by firstly 'cutting' the index that needs to be renormalized, secondly contracting the rest of the network leaving the cut indices open, and thirdly decomposing this 'density matrix' to obtain the unitary renormalization operators. For the iPEPS algorithm, it was unclear whether this algorithm could be directly applied.
Firstly, the procedure of [NO96] was developed for an isotropic, translationally invariant system. Secondly, the act of contracting the entire network other than a 'cut' link seemed expensive and difficult to scale to larger unit cells. Thirdly, there was concern that choosing the renormalization operators based on the present environment might cause the CTM to converge very quickly to some local stationary state.

B.4 The Directional CTMRG Approach

Motivated by the algorithm in [NO96], Orus and Vidal described a directional variant of this algorithm [OV09]. A slightly modified version of this is shown in fig. B.5. This diagram details the stages to renormalize the tensors C1, C4, E2 and E1 after a step to the left.

The steps of this algorithm are as follows. Starting with the corner tensors C1[0] and C4[0] in fig. B.5i, we form the tensors C̃1[0] and C̃4[0] by moving the edge tensors E3 and E8 to the left. Next, we assert that as the links between C1 and E2, and between E1 and C4, correspond to the accumulation of the same correlations, the same renormalization tensors should be applied. To determine these tensors, we firstly mix the subspaces by computing the sum T = C̃1[0] C̃1[0]† + (C̃4[0] C̃4[0]†)∗. The conjugation of the second term is an important modification to the algorithm in [OV09], and helps to stabilise the algorithm. From this sum, the renormalization tensor Q is determined by singular value decomposition. In step iv), we form the matrix R1[0] by contracting C1[0], E2[0], E3[0] and ta. Likewise, we form the matrix R4[0] from the bottom four tensors. In step v), we determine the renormalization tensor W by the same reasoning as step iii). Lastly, we apply the renormalization tensors to obtain the new tensors E2[1] and E1[1]. The beauty of this algorithm is its simplicity.
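The subspace-mixing step just described can be sketched in a few lines. All dimensions and tensors below are random stand-ins (not thesis code); the point is that T = C̃1C̃1† + (C̃4C̃4†)∗ is Hermitian and positive semidefinite, so its SVD yields an isometric renormalizer Q:

```python
import numpy as np

# Random complex stand-ins for the enlarged corners C~1 and C~4
# (illustrative dimensions: enlarged bond 16, truncated bond chi = 4).
rng = np.random.default_rng(3)
chi, big = 4, 16
C1 = rng.random((big, big)) + 1j * rng.random((big, big))
C4 = rng.random((big, big)) + 1j * rng.random((big, big))

# Mix the subspaces; the conjugation of the second term is the
# stabilising modification described in the text.
T = C1 @ C1.conj().T + (C4 @ C4.conj().T).conj()
hermitian = np.allclose(T, T.conj().T)

# SVD of T gives the isometry Q onto the chi dominant directions.
U, S, Vh = np.linalg.svd(T)
Q = U[:, :chi]
```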
In practice it is very fast, as it involves only one full-rank SVD per step (in the determination of W and W†). Unfortunately, it was observed in some cases that the eigenvalues of the corner matrices did not converge but oscillated, and as a result the physical quantities of the PEPS also oscillated.

B.5 An Improved Directional Algorithm

A concern with the above approach is the way in which the two subspaces are combined. Deriving the renormalization operators from the matrix sum C1C1† + (C4C4†)∗ seems an ad hoc choice. An approach more motivated by experience with tensor networks is to treat the four tensors of the CTM as a finite MPS.

The steps of this algorithm are shown in fig. B.6. It begins in the same way as the previous algorithm, absorbing E3 into C1[0] and E8 into C4[0]. In step ii), we assume the legs marked black correspond to a local Hilbert space; that is, we assume the indices of these legs run over a set of vectors that together form an orthonormal basis. Then, we carry out the orthonormalization of the finite MPS, determining the operators Q1 and Q4 that, when acting on C̃1[0] and C̃4[0], return tensors C̄1[0] and C̄4[0]. The 'scalar product' matrix of these tensors, defined in the same way as for a partition of an infinite MPS (see Appendix A), is the identity matrix. In step iii), we mix the subspaces by contracting the inverse matrices Q1⁻¹ and Q4⁻¹, and determine the singular value decomposition. We then truncate to retain only the χ largest singular values, and contract to form the new tensors C̃1[1] and C̃4[1].

Figure B.5: The stages of the first directional CTMRG algorithm.

Figure B.6: The stages of a more stable, but computationally more expensive, directional CTMRG algorithm.
Following this, we perform the same operations on the upper and lower matrices R1[0] and R4[0], which are defined in fig. B.5, to find the updated tensors Ẽ2[1] and Ẽ1[1]. This algorithm is more stable than the first directional scheme. It is also more effective: in computing the magnetization of the 2D classical Ising model, this method performed slightly better than the earlier version. However, each step involves three times as many SVD operations, and as a result the overall cost is roughly two to three times higher.

B.6 Recent Developments

In the appendix of [CJV10] we describe a CTMRG algorithm that extends Nishino's original algorithm to an anisotropic lattice with an A-B-C-D unit cell. It has been observed that this scheme shows particular promise. It is extremely stable and converges to solutions more quickly than either of the methods described above. For a study of fermionic 2D lattice systems where the PEPS were converged via the simplified update, this CTMRG scheme produced lower-energy results than other proposed schemes. However, a detailed analysis of all three CTMRG schemes against a system with an exact solution, or a complete simulation of each method using the full environment to update the PEPS, has not been carried out.

Appendix C

Update Schemes for PEPS tensors

This appendix describes techniques developed to update the PEPS tensors during the iPEPS algorithm. Our discussion is based on an infinite square PEPS covered by tensors A and B, with bond dimension D. We also assume that the Hamiltonian contains nearest-neighbour terms, and by virtue of this that the Suzuki-Trotter decomposition of the imaginary-time evolution operator yields two-body gates. So whilst in this appendix we only deal with the update of a pair of tensors sharing a single link, the approaches developed here can be extended to models with more complex interactions.
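The construction of such a two-body gate can be sketched directly. Here the Hamiltonian term is an illustrative Ising-like coupling sz⊗sz for two spin-1/2 sites (not a model from the thesis); since the term is Hermitian, exp(−τh) follows from its eigendecomposition:

```python
import numpy as np

# Illustrative two-site Hamiltonian term: h = sz (x) sz for spin-1/2.
sz = np.diag([1.0, -1.0])
h = np.kron(sz, sz)                 # shape (4, 4), Hermitian
tau = 0.01                          # imaginary-time step

# Gate g = exp(-tau h) via Hermitian eigendecomposition:
# g = V diag(exp(-tau w)) V^T.
w, V = np.linalg.eigh(h)
g = (V * np.exp(-tau * w)) @ V.T
```

In the algorithm this gate is reshaped into a four-index tensor and applied across the link shared by A and B.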
C.1 Problem Overview

Following the reasoning of section 5.3, the iPEPS algorithm proceeds by applying a single gate g to a particular link of the PEPS. We call this state |ψg⟩ ≡ g|ψ⟩. Then, we want to find the new state |ψA′B′⟩ with modified tensors A′ and B′, such that we minimise the square distance:

dse = || |ψA′B′⟩ − |ψg⟩ ||² ≡ min_{A′,B′} ( ⟨ψg|ψg⟩ − ⟨ψA′B′|ψg⟩ − ⟨ψg|ψA′B′⟩ + ⟨ψA′B′|ψA′B′⟩ )   (C.1)

This quantity is depicted as a tensor network in fig. C.1. Here, we use the approximate six-tensor environment (see fig. 5.4), which can be computed by either the infinite MPS approach (Appendix A) or the corner transfer matrix (Appendix B). A single update amounts to finding the tensors A′ and B′ that minimise dse.

Figure C.1: The expression of the distance metric in terms of tensor contractions

The problem as stated is an unconstrained quadratic optimization and is easy to solve: the tensors A′ and B′ are determined by contracting A, B and the gate, and splitting the resulting tensor. However, the dimension of the interjoining bond may then be as high as Dd². Continuing with such an approach, the dimensions of the PEPS bonds would grow exponentially over subsequent iterations, causing the computational cost of basic operations to inflate and rendering the PEPS a computationally inefficient representation of the state. As a result, a basic requirement of the update is that it must find the tensors A′ and B′ that minimise the square error, subject to the restriction that the dimension of any PEPS bond is limited by D. This is the same problem that was encountered with 1D systems and the MPS (see Appendix A for a discussion). In the iTEBD algorithm, it was seen that a very good approximation to the optimal updated MPS tensors could be determined by simply truncating the tensors returned by the local split operation.
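The four-term expansion of eq. (C.1) can be checked on flat vectors, where the states are simply complex arrays (a toy check, not thesis code): the four overlaps reproduce the direct square distance.

```python
import numpy as np

# Two random complex "states" standing in for |psi_g> and |psi_A'B'>.
rng = np.random.default_rng(6)
psi_g = rng.random(32) + 1j * rng.random(32)
psi_ab = rng.random(32) + 1j * rng.random(32)

# Direct square distance || |psi_A'B'> - |psi_g> ||^2 ...
d_direct = np.linalg.norm(psi_ab - psi_g) ** 2

# ... equals the four-overlap expansion of eq. (C.1).
d_terms = (psi_g.conj() @ psi_g - psi_ab.conj() @ psi_g
           - psi_g.conj() @ psi_ab + psi_ab.conj() @ psi_ab).real
```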
However, the basis for this algorithm lay in the fact that the MPS can be expressed in a canonical form. In 2D, there exists no known canonical form: a PEPS cannot be partitioned into two subsystems by cutting a single bond, and the 2D analogue of the bipartite Schmidt representation about a link seems elusive. However, since enforcing a restriction on the bond dimension does not change the convex nature of the optimisation surface, there are many established numerical algorithms that can be employed to minimise dse and determine A′ and B′. In the next sections, we describe two such algorithms.

C.2 A Variational Algorithm

Here, we consider the A′ and B′ tensors as separate subspaces within which to minimise the cost function. The process is to alternate between the subspaces, locally minimising the cost function at each step, in the hope of approaching the global minimum. In essence, this means that for each iteration and each subsystem, we are solving an unconstrained quadratic optimization problem. In order to do this, we must firstly define how to compute the gradient of dse with respect to A′ or B′. For a small deviation δA′, and ignoring second and higher-order terms, the change in the cost function δdse,A may be represented as a tensor network as in figure C.2. Making the substitutions in figure C.3 and expressing the tensor A′ as a vector a′, the change in the cost function may be written:

δdse,A = −δa′† NA − NA† δa′ + δa′† MA a′ + a′† MA δa′   (C.2)

Figure C.2: The tensor network representation of equation C.2.

It is easily seen that solving the linear equation MA a′ = NA gives the δdse,A = 0 solution. Since MA is positive definite, it is possible to show that the second-order derivative is always positive, and hence the zero-gradient solution corresponds to a global minimum within the space of a′ vectors. We then reshape a′ into a tensor A′ and make the replacement A → A′.
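With a fixed environment, one variational sweep step therefore reduces to a linear solve. A minimal sketch with random stand-ins for MA and NA (the true matrices come from contracting the environment; here MA is made positive definite by hand, matching the assumption in the text):

```python
import numpy as np

# Random stand-in for the environment-derived matrices of fig. C.3;
# n plays the role of the number of coefficients of A'.
rng = np.random.default_rng(4)
n = 16
X = rng.random((n, n))
MA = X @ X.T + n * np.eye(n)    # Hermitian positive definite by construction
NA = rng.random(n)

# Zero-gradient condition M_A a' = N_A: a direct linear solve.
a_new = np.linalg.solve(MA, NA)

# At the solution the gradient (M_A a' - N_A) vanishes, so d_se is minimised
# within this subspace; a_new is then reshaped back into the tensor A'.
grad = MA @ a_new - NA
```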
After updating A, we find the analogous tensors NB and MB and solve for the vector b′ and the tensor B′ accordingly, before updating B. The solution of MA a′ = NA is obtained by inverting the matrix MA, at a cost of D¹².

Figure C.3: The matrices NA and MA, which are required to calculate the derivative of the distance function with respect to the elements of A.

A More Efficient Implementation

A more efficient implementation of this approach can be achieved by updating only the minimal subspace of the tensors A′ and B′ affected by the link involved in the update. To do this, we split the tensors A′ and B′ according to figure C.4. Then, we only need to update the tensors X and Y. Assuming that d ≤ D², X and Y contain D²d² coefficients, and MX and MY are now square matrices with D²d rows. The cost of inverting the matrices MX and MY is D⁶d³. A good initial starting point for the tensors X′ and Y′ may be obtained by contracting the initial X and Y with the gate, appropriately splitting, and then truncating the new shared index to D.

Figure C.4: The splitting of A and B into W, X, Y and Z. Our new optimization domain is the elements of X and Y, reducing the number of coefficients in our search from 2D⁴d to 2D²d².

C.3 Conjugate Gradient Algorithm

The well-known conjugate gradient (CG) algorithm offers an alternative means of finding the PEPS tensors that minimise the distance cost function. It is similar to a steepest descent algorithm, in that it uses gradient information to travel towards the minimum. In this way we can think of CG optimisation as a directed walk in the space of bounded-dimension PEPS tensors, whereas the variational approach is a series of hops in this space. The conjugate gradient algorithm aims to solve an optimization where the objective takes a quadratic form,

f(v) = (1/2) v† · T · v − r · v,   (C.3)

and we wish to solve for the v that minimises f(v).
If we know T and r, the problem is solved trivially. But assume that we do not know T and r; all that we know is that the problem is approximately quadratic (and convex), and how to compute f(v) and the gradient ∇f. The PEPS update problem can be cast in this form. Consider vectorizing the components of A′ and B′ into vectors a′ and b′ that are then concatenated to form v. We know that the problem we are solving is quadratic and convex, and we can easily evaluate the function f(v) = dse. The missing piece of the puzzle is being able to compute the derivative of dse with respect to the individual elements of the vector v.

Since the vector v is a concatenation of a′ and b′, we can construct the vector ∇dse from the concatenation of the vectors ∇dse,A and ∇dse,B. We have already seen that for a small change δa′ in the tensor A′, the cost function changes as

δdse,A = −δa′† NA − NA† δa′ + δa′† MA a′ + a′† MA δa′   (C.4)

Consider that δa′ = δx + jδy. In the purely real case,

δdse,A = −δx NA − NA† δx + δx MA a′ + a′† MA δx = 2 Re(δx (MA a′ − NA))   (C.5)

In the purely imaginary case, δa′ = jδy,

δdse,A = jδy NA − j NA† δy − jδy MA a′ + j a′† MA δy = 2 Im(δy (MA a′ − NA))   (C.6)

Realising that the gradient is the direction of greatest increase, it is easy to see that the gradient for the elements of A′ is then given by

∇dse,A = Re(MA a′ − NA) + j Im(MA a′ − NA)   (C.7)

This defines the way in which we should modify the coefficients of A′ in order to move in the direction of the greatest increase in dse,A. The direction g0,A = −∇dse,A is then the direction of maximal decrease. Similarly,

∇dse,B = Re(MB b′ − NB) + j Im(MB b′ − NB)   (C.8)

Concatenating ∇dse,A and ∇dse,B gives the gradient ∇dse and the direction of greatest decrease, g0 = −∇dse.

The CG algorithm, like steepest descent, works in an iterative sense. The key difference between the CG algorithm and steepest descent methods is what happens after the first iteration.
Labeling our choice of direction in the nth step as hn, in the first iteration we choose the direction of steepest descent,

h0 = g0   (C.9)

In following iterations, we take the direction as

hn = gn + γn hn−1   (C.10)

where, in the standard Fletcher-Reeves form,

γn = (gn · gn) / (gn−1 · gn−1),   (C.11)

in Polak-Ribière form,

γn = (gn · (gn − gn−1)) / (gn−1 · gn−1),   (C.12)

and in Hestenes-Stiefel form,

γn = (gn · (gn − gn−1)) / (gn−1 · (gn − gn−1)).   (C.13)

As explained in [PTVF92], CG techniques avoid the problem of the steepest descent method getting stuck in narrow valleys by using a combination of the current gradient information and past gradient information. Theoretically, for an exact quadratic minimization with N free parameters, the conjugate gradient scheme finds the minimum in N line minimizations. Even when the quadratic form is only approximate, a significant speedup over generic steepest descent methods is observed.

Line Minimization

Once the gradient gn and direction of optimization hn have been calculated, we undertake a line minimization by stepping along the line in the direction hn looking for a turning point. This is an iterative process in itself. The technique we use borrows heavily from [PTVF92] and [Mac04]. At each step along the line, q, we need to calculate the instantaneous gradient ∇dse,n,q and the scalar product lq = −∇dse,n,q · hn. We call this the line gradient. Whilst it is difficult to exactly determine the turning point, we can approximate its position by detecting two close points with differing sign of the gradient and interpolating between them.

C.4 Comparison of the Methods

C.4.1 Stability and Effectiveness

The choice of which method to use is a trade-off between stability and effectiveness. In general, the variational update performs better (i.e. returns a slightly lower energy PEPS approximation) where it is stable. But our empirical observation is that its stability is only ensured for D = 2.
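The CG scheme of eqs. (C.9)-(C.13) can be sketched on a known quadratic, where exact line minimization is available in closed form. This toy (not the thesis implementation) uses the Fletcher-Reeves form of γ and converges in at most N line searches for an N-dimensional quadratic:

```python
import numpy as np

# A known quadratic f(v) = v.T T v / 2 - r.v with T symmetric positive
# definite (random stand-in, illustrative dimensions).
rng = np.random.default_rng(5)
N = 10
A = rng.random((N, N))
T = A @ A.T + N * np.eye(N)
r = rng.random(N)

v = np.zeros(N)
g = r - T @ v                 # g_0 = -grad f, eq. (C.9): h_0 = g_0
h = g.copy()
for _ in range(N):
    alpha = (g @ g) / (h @ T @ h)       # exact line minimization along h
    v += alpha * h
    g_new = g - alpha * (T @ h)
    gamma = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves, eq. (C.11)
    h = g_new + gamma * h               # eq. (C.10)
    g = g_new

exact = np.linalg.solve(T, r)           # minimiser of f for comparison
```

In the PEPS update, T and r are never formed explicitly; the gradient of eqs. (C.7)-(C.8) plays the role of g, and the line minimization is done numerically by sign-change detection as described below.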
We suggest that the variational solution has three drawbacks.

• The updates of tensors A′ and B′ are performed separately. As a result, it is possible we will preferentially optimise in one subspace and end up at a local minimum.
• The inversion of the matrix M introduces spurious correlations into the system.
• It is more susceptible to the so-called positive definite problem.

The first concern is greatly mitigated by finding good initial starting points, which as explained above is simple in the case of a two-body nearest-neighbour gate. However, when we have more exotic update terms and more tensors to update - such as the four-body plaquette terms considered in Chapter 10 - this can be a problem.

The second and third problems are of major concern and interrelated. To understand their origin and implications, we must consider the way in which the environment is an approximation to the actual environment. As a simple model, consider that we are updating A and our representation of MX is diagonal. Also, consider that the environment is contracted with finite χ, and that this approximation introduces an error. In our simple model, this error is represented as the nth eigenvalue λn of MX being offset by a small amount εn from the exact (infinite χ) value. When we invert MX, this small error can become very significant. Given an eigenvalue of MX, λn = λn,exact + εn, under inversion this becomes λn⁻¹ = λn,exact⁻¹ − εn/λn,exact². For large λn,exact, the effect of εn will be negligible. For small λn,exact, however, the displacement of λn⁻¹ from λn,exact⁻¹ can be very large. This does not affect our ability to minimise dse; however, it can mean that the tensors we find are far from those which would be obtained if the environment were computed exactly.
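A two-line numerical illustration of this amplification (with arbitrary illustrative values): the same perturbation ε shifts the inverse of a small eigenvalue by many orders of magnitude more than the inverse of a large one, since 1/(λ + ε) ≈ 1/λ − ε/λ².

```python
# Illustrative eigenvalues of M_X and a common finite-chi error eps.
lam_large, lam_small, eps = 1.0, 1e-6, 1e-8

# Shift of the inverted eigenvalue in each case.
err_large = abs(1 / (lam_large + eps) - 1 / lam_large)   # ~ eps
err_small = abs(1 / (lam_small + eps) - 1 / lam_small)   # ~ eps / lam_small^2
```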
We may find large amounts of numerical noise in the tensors, and when the replacement of the new tensors is enforced globally, this means that we may introduce spurious correlations into the system. This makes it progressively harder to converge an environment with finite χ.

Consider now that one of the eigenvalues of MX is a small negative value. An exact representation of MX is positive-semidefinite, but the finite-χ approximation can destroy this property. The effect of this is that the convexity of the problem is destroyed: there exists a direction along which the cost function decreases uniformly, where in the infinite-χ representation no such direction exists. During inversion, this small negative component becomes a large negative component and greatly changes the coefficients of the new tensor. This is known as the positive definite problem.

The problems in the above discussion are related to the approximate nature of the environment, and simply surface during the variational update. In light of this, it is imperative to consider the relative influence of these problems in the CG update. Firstly, there is no inversion in the CG update. The environment is used to compute the gradient, but the gradient is linear in the environment, and so small errors in small components do not have a great influence on determining the new tensors. Secondly, the CG update is in essence a walk rather than a series of jumps. So even though there may exist a direction in which the cost function always decreases, since this is most likely associated with a small negative eigenvalue, the slope in this direction will be very slight. It is likely that the CG algorithm will favour directions in which the descent is more severe and for which the line minimization encounters a turning point.
Even if it does choose to move in this uniformly decreasing direction, since there are only a finite number of steps in any stage of the algorithm, it is likely that the resulting state will not be displaced too far.

In our iPEPS simulations, we have only found the variational procedure stable for D = 2 PEPS with nearest-neighbour updates. For higher D, the finite-χ effects become too severe, and the CG algorithm must be used. Also, for the four-site update used in Chapter 10, it was seen that the variational update too heavily favours the first tensor of the update, causing the algorithm to find a solution that did not optimise the cost function well in a finite number of steps. It should also be noted that, by and large, when both approaches work well the observed physical properties agree, but the characteristics of the tensor coefficients may not. This means that, as an example, a D = 2 PEPS converged with the variational update is not necessarily a good starting point for a D = 3 simulation with the CG algorithm.

C.4.2 Computational Cost of the Link Update Schemes

Computational Cost of the Variational Update

The cost of the variational update depends on two main computations:

1. The cost of computing MX and NX (and MY and NY).
2. The inversion of the MX (and MY) operator.

We assume that there are a total of SV iterations of the variational update. For the first component, we incur an out-of-loop cost scaling as χ²D⁶d² + χ³D⁴d² + χ²D⁴d⁴, and a per-iteration cost scaling as D⁵d⁵. The inversion in the most aggressive implementation scales as D⁶d³. Thus, the total cost scales as χ²D⁶d² + χ³D⁴d² + χ²D⁴d⁴ + SV(D⁵d⁵ + D⁶d³).

For the CG update, we assume SCG iterations of the update, each containing a line minimization of SLM steps. In practice, only the maximum number of line minimization steps is specified, as it is possible for the turning point to be detected quickly.
We once again need to compute $M_X$, $N_X$, $M_Y$ and $N_Y$ in order to evaluate the gradient at a given point. Thus, the CG algorithm has an out-of-loop cost scaling as $\chi^2 D^6 d^2 + \chi^3 D^4 d^2 + \chi^2 D^4 d^4$ and an in-loop cost scaling as $S_{LM} D^5 d^5$. The total cost scales as
$$\chi^2 D^6 d^2 + \chi^3 D^4 d^2 + \chi^2 D^4 d^4 + S_{CG} S_{LM} D^5 d^5.$$
As mentioned earlier, $S_{CG}$ itself scales with the dimension of the vector space in which the solution vector $v$ lives. Thus, for the update of a single link, $S_{CG} \propto D^2 d^2$.