Lie Groups (GraMPs) Spring 2009
Hans Plesner Jakobsen

1. Topology

Here are some basic definitions concerning topological spaces. You may think of the topology as coming from a metric, but other topologies may occur.

Definition 1.1. A set X is called a topological space with topology T provided there is a family T of subsets of X for which the following holds:
• X ∈ T and ∅ ∈ T.
• O1, . . . , Or ∈ T ⇒ O1 ∩ · · · ∩ Or ∈ T.
• If {Oα}α∈A ⊆ T, then ∪α∈A Oα ∈ T (A is any index set).
The sets in T are called the open subsets of X. If x ∈ X, a neighborhood of x is by definition any open subset O with x ∈ O. The topological space X is Hausdorff if

(1) ∀x1 ≠ x2 ∈ X : ∃O1, O2 ∈ T : x1 ∈ O1, x2 ∈ O2 and O1 ∩ O2 = ∅.

A set X can always be made into a topological space by choosing T = {X, ∅}. This is the trivial topology. At the other extreme, T = P(X) (all subsets of X) is also a topology. This, too, is of no practical use. We will write (X, T) when we are in the situation of Definition 1.1.

Definition 1.2. Let (X1, T1) and (X2, T2) be topological spaces. A map f : X1 → X2 is continuous (with respect to the given topologies) if

(2) ∀U ∈ T2 : f⁻¹(U) ∈ T1.

Definition 1.3. A subfamily B ⊆ T is called a basis for the topology, or a basis for the open sets, if any O ∈ T can be written as a union of some sets from B. The space X is said to be second countable if there is a countable basis for the open sets.

Definition 1.4. A topological space (X, T) is disconnected if there are two open non-empty sets O1, O2 of X such that

(3) X = O1 ∪ O2 and O1 ∩ O2 = ∅.

If X is not disconnected we say that it is connected.

Definition 1.5. If (X, T) is a topological space and Y ⊂ X,

(4) TY = {O ∩ Y | O ∈ T}

defines a topology on Y called the relative topology or the subset topology.

2. Differentiable manifolds

Definition 2.1. Let M be a second countable topological space. A differentiable structure on M (of dimension n) is a family A = {(χi, Oi)}i∈I, where I is an index set, such that
M1: ∀i ∈ I : Oi is an open subset of M and χi is a homeomorphism of Oi onto the open set χi(Oi) ⊆ R^n.
M2: M = ∪i∈I Oi.
M3: ∀i, j ∈ I : χi ◦ χj⁻¹ : χj(Oi ∩ Oj) → χi(Oi ∩ Oj) is a C∞ map.

Remark 2.2. Actually, it is a "differentiable structure of class C∞" we have defined. In a similar fashion one may define structures of class C^k for all k = 0, 1, . . . , as well as class C^ω (analytic), but here we are only concerned with C∞.

Definition 2.3. A set M is a differentiable manifold (an n-dimensional differentiable manifold) if M is a second countable Hausdorff topological space with a differentiable structure A = {(χi, Oi)}i∈I (of dimension n). Furthermore, A is called an atlas, and the individual elements (χi, Oi) are called charts. One also refers to (χi, Oi) as "local coordinates" on M, or as a local parametrization. The n in the definition is called the dimension of M. Occasionally we will write M^n.

3. Lie Groups

Definition 3.1. A Lie Group G is an abstract group which is also a differentiable manifold and where the two structures are compatible in the sense that the maps
G × G ∋ (g, h) ↦ g·h ∈ G and G ∋ g ↦ g⁻¹ ∈ G
are both smooth.

3.1. Matrix Lie Groups.

Definition 3.2. A Matrix Lie Group is a closed subgroup of GL(n, C) for some n.

In the case of a matrix Lie Group, the fundamental observation is that there exists an open ball Uε = B0(ε) = {X ∈ Mn(C) | ‖X‖ < ε} such that Vε = exp(Uε) is an open set in GL(n, C) with the property that G ∩ Vε = exp(Uε ∩ g), where g is the Lie algebra of G.

For any g ∈ G one may define a chart (χg, g·Vε), where Vε is as above, and where (g·Vε) ∋ h ↦ χg(h) := log(g⁻¹h). Since the operations of taking the inverse and of multiplication from the left are smooth in GL(n, C), and since log is smooth, it follows easily that the "changes of coordinates" (χa) ◦ (χb)⁻¹ are smooth for all a, b ∈ G wherever they are defined. Thus, a Matrix Lie Group is indeed a Lie Group.
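To make the fundamental observation and the chart χg(h) = log(g⁻¹h) concrete, here is a small numerical sketch (not part of the notes; it assumes Python with numpy and scipy, and uses G = SO(3), whose Lie algebra consists of the skew-symmetric matrices, cf. Example 4.12 below):

    import numpy as np
    from scipy.linalg import expm, logm

    # A small element X of the Lie algebra of SO(3): a skew-symmetric 3x3 matrix.
    X = 0.1 * np.array([[ 0.0,  1.0,  0.5],
                        [-1.0,  0.0, -0.3],
                        [-0.5,  0.3,  0.0]])

    A = expm(X)
    print(np.allclose(A.T @ A, np.eye(3)), np.isclose(np.linalg.det(A), 1.0))  # A lies in SO(3)

    # The chart chi_g(h) = log(g^{-1} h) around an arbitrary group element g:
    g = expm(0.2 * np.array([[0.0, -1.0, 0.0],
                             [1.0,  0.0, 0.0],
                             [0.0,  0.0, 0.0]]))
    h = g @ expm(X)                                    # a point in the neighbourhood g·V_eps
    print(np.allclose(logm(np.linalg.inv(g) @ h), X))  # the chart recovers X, up to rounding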
4. The Matrix Lie Groups, their Lie algebras, and the exponential map

We give here some fundamental definitions and results - the latter without proofs - relating to certain families of Lie groups and Lie algebras. Some of the definitions are particular to this course.

Definition 4.1. By a matrix Lie group we understand a closed subgroup of either GL(n, R) or GL(n, C).

Definition 4.2. A Lie algebra is a vector space g with a bilinear map [·, ·] : g × g → g satisfying
• (Skew symmetry) ∀X, Y ∈ g : [X, Y] = −[Y, X].
• (The Jacobi identity) ∀X, Y, Z ∈ g : [[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0.
The map [·, ·] is called the Lie bracket. The linear structures may in fact be over any field F. If F = R we say that the Lie algebra is real; if F = C we say that it is complex.

Definition 4.3. The Lie algebras gl(n, R) and gl(n, C) are defined to be the sets of all n × n real and complex matrices, respectively, equipped with the Lie bracket [X, Y] = XY − YX. Here XY is given by matrix multiplication, etc.

Remark 4.4. Any real subspace of gl(n, R) - or of gl(n, C)(!) - which is invariant under the Lie bracket is a real Lie algebra. Any complex subspace of gl(n, C) which is invariant under the Lie bracket is a complex Lie algebra.

Example 4.5. u(n) = {H ∈ gl(n, C) | H* = −H} is a real Lie algebra. su(n) = {H ∈ gl(n, C) | H* = −H and Tr(H) = 0} is a real Lie algebra.

Remark 4.6. By the famous Ado's Theorem, any finite dimensional real Lie algebra is isomorphic to one obtained from a subspace of gl(n, R) for some n. In contrast, there are very many finite-dimensional abstract Lie groups that do not have isomorphic images inside some GL(n, R) or GL(n, C).

Definition 4.7. The exponential map exp : gl(n, C) → GL(n, C) is given as

∀A ∈ gl(n, C) : exp(A) = e^A = Σ_{i=0}^∞ A^i/i! .

Observe that ‖Σ_{i=0}^∞ A^i/i!‖ ≤ Σ_{i=0}^∞ ‖A‖^i/i! = e^{‖A‖}, where ‖A‖ denotes the operator norm of A. A similar estimate holds for the Hilbert-Schmidt norm ‖·‖_HS; cf. Exercise 2.3.2.

Proposition 4.8. The following useful facts hold:
i) ∀s, t ∈ R : exp((s + t)A) = exp(sA) exp(tA).
ii) More generally, if [A, B] = 0 then exp(A + B) = exp(A) exp(B).
iii) det(e^A) = e^{Tr(A)}.
iv) exp(A) is invertible; (exp(A))⁻¹ = exp(−A).
v) (exp(A))* = exp(A*).

Moreover, we mention without proof

Lemma 4.9. If R ∋ t ↦ a(t) ∈ GL(n, C) and R ∋ t ↦ b(t) ∈ GL(n, C) are differentiable functions, then
d/dt (a(t)·b(t)) = (da(t)/dt)·b(t) + a(t)·(db(t)/dt) (= a′(t)b(t) + a(t)b′(t)).
Moreover,
d/dt exp(t·A) = A exp(t·A) = exp(t·A) A.

The first major theorem relating Lie groups and Lie algebras is the following:

Theorem 4.10. Let G be a matrix Lie group. Then
g := {H ∈ gl(n, R) | ∀t ∈ R : exp(tH) ∈ G}
is a Lie subalgebra. A similar result is true for gl(n, C). (Footnote 1: The condition is still that ∀t ∈ R : exp(tH) ∈ G.) We say that g is the Lie algebra of G.

Example 4.11. To find the Lie algebra su(2) of SU(2) we observe that for X to be in su(2) we must have ∀t : (exp(tX))(exp(tX))* = 1 and ∀t : det(exp(tX)) = 1. If we use v) in Proposition 4.8 and differentiate the first equation at t = 0 we obtain X + X* = 0, i.e. X is skew adjoint (iX is self adjoint). The second equation becomes e^{t·Tr(X)} = 1 for all t, which implies that Tr(X) = 0. Thus (writing matrices with rows separated by semicolons),
su(2) = { [ ix  λ ; −λ̄  −ix ] | x ∈ R, λ ∈ C }.
A basis of su(2) is given by e.g.

(5) X = (1/2)[ i  0 ; 0  −i ],  Y = (1/2)[ 0  −i ; −i  0 ],  Z = (1/2)[ 0  1 ; −1  0 ].

Notice that

(6) [X, Y] = Z, [Y, Z] = X, [Z, X] = Y.

Notice also that "the same" computations give that u(n) = {X ∈ gl(n, C) | X + X* = 0} and su(n) = {X ∈ gl(n, C) | X + X* = 0 and Tr(X) = 0}.
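As a quick numerical sanity check of (5) and (6) (a sketch, not from the notes; numpy and scipy assumed), the bracket relations hold and exp(tX) is unitary with determinant 1, i.e. lies in SU(2):

    import numpy as np
    from scipy.linalg import expm

    X = 0.5 * np.array([[1j, 0], [0, -1j]])
    Y = 0.5 * np.array([[0, -1j], [-1j, 0]])
    Z = 0.5 * np.array([[0, 1], [-1, 0]], dtype=complex)

    def brk(A, B):            # the Lie bracket of Definition 4.3
        return A @ B - B @ A

    print(np.allclose(brk(X, Y), Z), np.allclose(brk(Y, Z), X), np.allclose(brk(Z, X), Y))

    t = 0.7
    U = expm(t * X)           # should lie in SU(2) for every real t
    print(np.allclose(U.conj().T @ U, np.eye(2)), np.isclose(np.linalg.det(U), 1.0))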
Example 4.12. The Lie algebra so(3) of SO(3) is given as
so(3) = { [ 0  a  b ; −a  0  c ; −b  −c  0 ] | a, b, c ∈ R }.
A basis of so(3) is given by e.g.
A = [ 0  1  0 ; −1  0  0 ; 0  0  0 ],  B = [ 0  0  1 ; 0  0  0 ; −1  0  0 ],  C = [ 0  0  0 ; 0  0  −1 ; 0  1  0 ].

Definition 4.13. Let g, h be real Lie algebras. A real linear map dπ : g → h that satisfies
∀X, Y ∈ g : dπ([X, Y]) = [dπ(X), dπ(Y)],
where the Lie brackets are computed in the relevant Lie algebras, is called a homomorphism. The definition for the complex case is analogous. In case h = gl(n, C) (viewed as a real Lie algebra) we say that dπ is a Lie algebra representation. In the latter case, if dπ(g) ⊆ u(n) we say that dπ is infinitesimally unitary.

The second major theorem relating Lie groups and Lie algebras is:

Theorem 4.14. Let φ : G → H be a homomorphism between two matrix Lie groups G and H. Let the Lie algebras be denoted g, h, respectively. Then there exists a Lie algebra homomorphism dφ : g → h such that

(7) ∀t ∈ R, ∀X ∈ g : φ(expG(tX)) = expH(t·dφ(X)),

i.e. the square formed by expG : g → G, expH : h → H, φ and dφ commutes: φ ◦ expG = expH ◦ dφ.

In the special case where φ = π is an n-dimensional complex (unitary) representation of a matrix Lie group G we obtain a Lie algebra homomorphism dπ : g → gl(n, C) (respectively u(n)). In this case, dπ(X) is called the infinitesimal generator corresponding to X.

Remark 4.15. It is of course not at all clear a priori why a Lie group homomorphism φ, which to begin with is only assumed to be continuous, should be differentiable and hence give rise to the map dφ; this is not obvious even in the special case of a continuous representation.

For later use we mention that any homomorphism dφ : g → h between real Lie algebras can be extended to the complexifications gC = g ⊗R C and hC = h ⊗R C simply by setting dφC(X1 + iX2) = dφ(X1) + i dφ(X2). This is extremely useful. Since any matrix A ∈ gl(n, C) can be written uniquely as A = K1 + iK2 with K1, K2 skew adjoint, it follows that any representation dπ of u(n) can be extended (complexified) to gl(n, C). But even more so, any representation of gl(n, R) can be complexified to a representation of gl(n, C). Hence, to any finite dimensional irreducible (contemplate this!) representation of gl(n, R) there corresponds an irreducible representation of u(n) (in the same space) and vice versa.
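The decomposition A = K1 + iK2 used above is easy to check numerically; the following sketch (numpy assumed, the matrix is an arbitrary example) verifies that K1 = (A − A*)/2 and K2 = (A + A*)/(2i) are skew adjoint and recover A:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # an arbitrary element of gl(3, C)

    K1 = (A - A.conj().T) / 2        # skew adjoint: K1* = -K1
    K2 = (A + A.conj().T) / (2j)     # skew adjoint: K2* = -K2

    print(np.allclose(K1.conj().T, -K1), np.allclose(K2.conj().T, -K2))
    print(np.allclose(A, K1 + 1j * K2))   # the unique decomposition A = K1 + i K2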
5. The classical groups

5.1. Bilinear forms and pseudo-orthogonal groups. We let B(V1, V2, . . . , Vn) denote the set of multilinear maps V1 × V2 × · · · × Vn → F, in analogy with the case of bilinear maps, and we consider elements B ∈ B(V1, V2, . . . , Vn). Further, we set B(V, V) = B2(V) and, in general, B(V, V, . . . , V) = Bn(V) (n copies of V).

Definition 5.1. Let B ∈ B(V, W). We say that B is non degenerate if
(∀v ∈ V : B(v, w) = 0) ⇒ w = 0 and (∀w ∈ W : B(v, w) = 0) ⇒ v = 0.

Let, as usual, Sn denote the symmetric group on n letters and let Sgn(σ) = (−1)^σ denote the sign of the permutation σ ∈ Sn.

Definition 5.2. Let B ∈ Bn(V). We say that B is symmetric if
∀v1, v2, . . . , vn ∈ V and ∀σ = (σ1, . . . , σn) ∈ Sn : B(vσ1, . . . , vσn) = B(v1, . . . , vn).

Definition 5.3. Let B ∈ Bn(V). We say that B is alternating if
∀v1, v2, . . . , vn ∈ V and ∀σ = (σ1, . . . , σn) ∈ Sn : B(vσ1, . . . , vσn) = Sgn(σ)B(v1, . . . , vn).

Assume that B ∈ B2(V) is non degenerate and symmetric. Then we set

(8) O(B, F) = {g ∈ AutF(V) | ∀v1, v2 ∈ V : B(gv1, gv2) = B(v1, v2)}.

One can say that O(B, F) is the invariance group of B. We also refer to O(B, F) as a pseudo-orthogonal group, though this terminology is used mostly for the case F = R.

A real vector space V with a positive definite inner product (dot product) (·, ·) (hereafter just called an inner product) is, as is well known, called a (pre) Euclidean space. If V ≅ R^n we denote by (·, ·)E the usual inner product

(9) (v, w)E = x1 y1 + x2 y2 + · · · + xn yn,

where v = (x1, x2, · · · , xn) and w = (y1, y2, · · · , yn), and we will also use this terminology if V ≅ C^n.

As is well known, if (V, (·, ·)) is a real Euclidean vector space, a linear operator o : V → V is said to be orthogonal if

(10) ∀x, y ∈ V : (o(x), o(y)) = (x, y).

We set O(V) = {o | o orthogonal}, but if V = R^n we denote it as O(V) = O(n) and we refer to it as the "orthogonal group in n dimensions". In all cases, O(V) is a group.

Now let B be a symmetric non degenerate form on R^n. It is straightforward to show (see the problem sessions) that there exists a symmetric linear operator JB so that

(11) ∀x, y ∈ R^n : B(x, y) = (x, JB(y))E.

One must have det(JB) ≠ 0. Conversely, any such operator (matrix) J defines a symmetric non degenerate form BJ by

(12) ∀x, y ∈ R^n : BJ(x, y) = (x, J(y))E.

Definition 5.4. If J is symmetric and has p strictly positive eigenvalues and q strictly negative eigenvalues with p + q = n, we define
OJ(p, q) = {o ∈ GL(n, R) | ∀x, y ∈ R^n : BJ(o(x), o(y)) = BJ(x, y)}
         = {o ∈ GL(n, R) | ∀x, y ∈ R^n : (Jo(x), o(y))E = (Jx, y)E}
         = {o ∈ GL(n, R) | o*Jo = J}.
(Footnote 2: Notice that one has [∀x, y ∈ R^n : (A(x), y) = 0] ⇔ A = 0; one can merely look at x = ei, y = ej for the vectors ei in the canonical basis.)

The groups thus defined depend explicitly on J but are all isomorphic. For this reason, the J is usually dropped and one just writes O(p, q).

Likewise, if Ba ∈ B2(V) is non degenerate and alternating (skew symmetric),

(13) Sp(Ba, F) = {g ∈ AutF(V) | ∀v1, v2 ∈ V : Ba(gv1, gv2) = Ba(v1, v2)}.

Again one could say that Sp(Ba, F) is the invariance group of Ba. These are the (real or complex, depending on whether F = R or F = C) symplectic groups. There is a skew symmetric linear operator JBa such that

(14) ∀x, y ∈ V : Ba(x, y) = (x, JBa(y))E.

To be completely specific, if V = R^{2n} (or V = C^{2n}) (Footnote 3: Notice that it says 2n!) we set

(15) JBa = [ 0  In ; −In  0 ],

where In is the identity on R^n (or C^n), and we set

(16) Sp(n, R) = {g ∈ GL(2n, R) | ∀v1, v2 ∈ V : (JBa gv1, gv2) = (JBa v1, v2)},

with an analogous definition in the complex case.
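As a concrete numerical illustration of Definition 5.4 and (16) (a sketch, not from the notes; numpy assumed, and the particular matrices are just examples), a hyperbolic rotation preserves BJ with J = diag(1, −1), so it lies in O(1, 1), and a determinant-one 2 × 2 matrix preserves the standard symplectic form, so it lies in Sp(1, R):

    import numpy as np

    # O(1,1): J = diag(1, -1); the last condition of Definition 5.4 reads o^T J o = J (o* = o^T for real o).
    J = np.diag([1.0, -1.0])
    t = 0.8
    o = np.array([[np.cosh(t), np.sinh(t)],
                  [np.sinh(t), np.cosh(t)]])          # a hyperbolic rotation ("boost")
    print(np.allclose(o.T @ J @ o, J))

    # Sp(1, R): J_Ba = [[0, 1], [-1, 0]]; condition (16) amounts to g^T J_Ba g = J_Ba.
    J_Ba = np.array([[0.0, 1.0], [-1.0, 0.0]])
    g = np.array([[1.0, 2.0], [1.0, 3.0]])            # det g = 1
    print(np.allclose(g.T @ J_Ba @ g, J_Ba))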
5.2. Hermitean forms. We remind you that a sesquilinear form Hs on a complex vector space V is a map

(17) V × V ∋ (v, w) ↦ Hs(v, w) ∈ C

satisfying
∀λ ∈ C, ∀v1, v2, w ∈ V : Hs(v1 + λv2, w) = Hs(v1, w) + λHs(v2, w) (linearity in the 1. variable), and
∀λ ∈ C, ∀v, w1, w2 ∈ V : Hs(v, w1 + λw2) = Hs(v, w1) + λ̄Hs(v, w2) (anti-linearity in the 2. variable).

We say that the sesquilinear form Hs (henceforth denoted H) is hermitian provided

(18) ∀v, w ∈ V : H(w, v) = \overline{H(v, w)}

(and call it skew hermitian if ∀v, w ∈ V : H(w, v) = −\overline{H(v, w)} - but we will not get involved with those here...). Finally, a hermitian form H is positive definite if

(19) ∀v ∈ V : v ≠ 0 ⇒ H(v, v) > 0.

As is well known, a complex vector space V with a positive definite hermitian form ⟨·, ·⟩ (hereafter just an inner product) is called a (pre) Hilbert space. If V ≅ C^n, we denote by ⟨·, ·⟩0 the usual inner product;

(20) ⟨v, w⟩0 = x1 ȳ1 + x2 ȳ2 + · · · + xn ȳn,

if v = (x1, x2, · · · , xn) and w = (y1, y2, · · · , yn). Recall that a linear operator U on a (pre) Hilbert space H is unitary if ∀x, y ∈ H : ⟨U(x), U(y)⟩ = ⟨x, y⟩. We set U(H) = {U | U unitary}, but if H = C^n we usually write U(H) = U(n) and call it the "unitary group in n dimensions". Quite generally, U(H) is a group.

Now, let H be a Hermitean form on C^n. If we do not maintain the assumption about it being positive definite, it is natural to assume instead that H is non degenerate. (Footnote 4: This is defined completely analogously to the case of bilinear forms.) So, this is what we do. It is straightforward to prove (see the problem sessions) that there is a Hermitean (or self adjoint) operator JH such that

(21) ∀x, y ∈ C^n : H(x, y) = ⟨x, JH(y)⟩0.

(Footnote 5: So JH satisfies ∀x, y ∈ C^n : ⟨JH(x), y⟩ = ⟨x, JH(y)⟩. More generally, the adjoint T* of a linear operator T on the Hilbert space (H, ⟨·, ·⟩) is defined by the equation ∀x, y ∈ H : ⟨T*(x), y⟩ = ⟨x, T(y)⟩. T is then Hermitian if T* = T.)

One must have that det(JH) ≠ 0. Conversely, any such operator (matrix) J defines a Hermitean form HJ by

(22) ∀x, y ∈ C^n : HJ(x, y) = ⟨x, J(y)⟩0.

Definition 5.5. If J is Hermitean and has p strictly positive eigenvalues and q strictly negative eigenvalues with p + q = n, we define
UJ(p, q) = {u ∈ GL(n, C) | ∀x, y ∈ C^n : HJ(u(x), u(y)) = HJ(x, y)}
         = {u ∈ GL(n, C) | ∀x, y ∈ C^n : ⟨Ju(x), u(y)⟩0 = ⟨Jx, y⟩0}
         = {u ∈ GL(n, C) | u*Ju = J}.
As in Definition 5.4, one usually drops the J and just writes U(p, q).

6. The Canonical Commutation Relations

Let Ba be a non-degenerate skew-symmetric form on a finite-dimensional real vector space V. We know that V then is even dimensional; dim V = 2n.

Definition 6.1. The Heisenberg Algebra hV, or just h(n), based on V is the vector space V × R equipped with the Lie bracket

(23) ∀(v, c), (v1, c1) ∈ V × R : [(v, c), (v1, c1)] = (0, Ba(v, v1)).

The Heisenberg Group H(V), or just H(n), based on V is the vector space V × R equipped with the product

(24) ∀(v, c), (v1, c1) ∈ V × R : (v, c) ⋆ (v1, c1) = (v + v1, c + c1 + (1/2)Ba(v, v1)).

One can easily bring the skew-symmetric form into "canonical form": specifically, there is a basis in which one may write the elements v of V as v = (q, p) with q, p ∈ R^n and such that

(25) Ba((q, p), (q1, p1)) = q·p1 − q1·p,

where we use the usual ·-product in R^n. We shall use this form in the sequel, both on the algebra and on the group level.

Consider the (2n + 2) × (2n + 2) block matrix

(26) [ 0  (1/√2)q^r  (1/√2)p^r  c ; 0  0  0  (1/√2)p^s ; 0  0  0  −(1/√2)q^s ; 0  0  0  0 ],

where the vectors in the first row are written in row form (superscript r) and the vectors in the last column are written in column form (superscript s). This formula defines a faithful representation of h(n), as can be easily seen. Likewise,

(27) [ 1  (1/√2)q^r  (1/√2)p^r  c ; 0  In  0  (1/√2)p^s ; 0  0  In  −(1/√2)q^s ; 0  0  0  1 ]

is a faithful (2 + 2n)-dimensional representation of H(n). Other matrix versions of h(n) and H(n) are also in use, for instance the (n + 2) × (n + 2) matrices

h̃(n) = { [ 0  q^r  z ; 0  0n  p^s ; 0  0  0 ] | q, p ∈ R^n, z ∈ R },

where 0n is the n × n zero matrix. The analogous formula with 1's in the diagonal (i.e. with 0 and 0n replaced by 1 and In) is then an isomorphic version of H(n).
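For n = 1 the claims about (26) and (27) are easy to test numerically; the sketch below (not from the notes; numpy and scipy assumed) checks that the matrix commutator reproduces the bracket (23) with Ba as in (25), and that the exponential of the algebra matrix (26) is the group matrix (27) with the same (q, p, c):

    import numpy as np
    from scipy.linalg import expm

    def heis_alg(q, p, c):
        # The 4x4 matrix (26) for n = 1.
        s = 1.0 / np.sqrt(2.0)
        M = np.zeros((4, 4))
        M[0, 1], M[0, 2], M[0, 3] = s * q, s * p, c
        M[1, 3] = s * p
        M[2, 3] = -s * q
        return M

    (q, p, c), (q1, p1, c1) = (1.0, 2.0, 0.3), (-0.5, 1.5, 0.7)
    A, B = heis_alg(q, p, c), heis_alg(q1, p1, c1)

    # [(v, c), (v1, c1)] = (0, Ba(v, v1)) with Ba from (25):
    print(np.allclose(A @ B - B @ A, heis_alg(0.0, 0.0, q * p1 - q1 * p)))

    # exp of the algebra matrix (26) is the group matrix (27) with the same (q, p, c):
    print(np.allclose(expm(A), np.eye(4) + A))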
In the sequel we will use directly the definitions (23) and (24) together with (25). The following formula gives the Stone-von Neumann representation of the canonical commutation relations. (Footnote 6: Notice two small changes from the lectures, introduced to make the formula more natural.)

Theorem 6.2. The following formula defines a strongly continuous unitary representation of H(n) in L^2(R^n):

(28) (U(q, p, c)f)(x) = e^{iħ(x·q + c − (1/2)q·p)} f(x − p).

Proof: The group property follows easily. To prove continuity, it suffices to prove that the map (q, p, c) ↦ U(q, p, c)f is continuous for f in a dense subspace. Here one may choose the Schwartz space S(R^n), or the space Cc∞(R^n) of smooth functions with compact support. For such functions the result follows easily from Lebesgue's Theorem on Dominated Convergence.

We will later indicate a proof of the Stone-von Neumann Uniqueness Theorem, which states that this representation is irreducible and essentially unique. (Footnote 7: Two different choices of the positive constant ħ will give inequivalent representations, but besides this, the representation is unique.) However, we wish to use this result immediately, and for this reason we now turn to the Symplectic Group:

Theorem 6.3. Let, for g ∈ Sp(n, R), g • (q1, · · · , qn, p1, · · · , pn) denote the natural linear action of g on R^{2n}. The map
Sp(n, R) × H(n) → H(n) : (g, ((q1, · · · , qn, p1, · · · , pn), c)) ↦ g ⋆ ((q1, · · · , qn, p1, · · · , pn), c) := (g • (q1, · · · , qn, p1, · · · , pn), c)
defines an action of Sp(n, R) on H(n) by automorphisms.
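The group property claimed in the proof of Theorem 6.2 can be tested pointwise for n = 1; the sketch below (not from the notes; numpy assumed, ħ set to 1, and the Gaussian is just a convenient test function) compares U(q, p, c)U(q1, p1, c1)f with U applied to the product element from (24) and (25):

    import numpy as np

    hbar = 1.0

    def U(q, p, c, f):
        # The operator (28) on functions of one variable, returned as a new function.
        return lambda x: np.exp(1j * hbar * (x * q + c - 0.5 * q * p)) * f(x - p)

    f = lambda x: np.exp(-x**2)                       # a test function in the Schwartz space

    (q, p, c), (q1, p1, c1) = (0.4, -1.2, 0.3), (1.1, 0.6, -0.5)
    lhs = U(q, p, c, U(q1, p1, c1, f))                # U(q,p,c) U(q1,p1,c1) f
    prod = (q + q1, p + p1, c + c1 + 0.5 * (q * p1 - q1 * p))   # the product (24) with Ba from (25)
    rhs = U(*prod, f)

    x = np.linspace(-3.0, 3.0, 7)
    print(np.allclose(lhs(x), rhs(x)))                # the group property holds pointwise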