LECTURES 9 AND 10 OF LIE-THEORETIC METHODS
VIPUL NAIK
Abstract. These are lecture notes for the course titled Lie-theoretic methods in analysis taught
by Professor Alladi Sitaram. In these lecture notes, Professor Sitaram discusses the Peter-Weyl theorem
which is the analogue in compact groups of the fact that representative functions generate all functions
on a finite group.
1. The Peter-Weyl theorem
1.1. Haar measure. Let G be a compact linear Lie group. Then, there exists a unique volume element
dg on G with the following properties:
(1) $\int_G dg = 1$
(2) The integral is invariant under left translation, viz. $\int_G f(gh)\,dh = \int_G f(h)\,dh$
(3) The integral is invariant under right translation, viz. $\int_G f(hg)\,dh = \int_G f(h)\,dh$
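As a concrete numerical illustration of these invariance properties, here is a sketch (assuming NumPy, and anticipating the identification of $SU(2)$ with the unit sphere $S^3 \subset \mathbb{R}^4$ developed later in these notes) that draws Haar-random elements of $SU(2)$ and checks left invariance by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su2(size):
    """Haar-random SU(2) elements: a uniform point (a, b, c, d) on the unit
    sphere in R^4 gives the matrix [[a + b i, c + d i], [-c + d i, a - b i]]."""
    v = rng.normal(size=(size, 4))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    a, b, c, d = v.T
    return np.stack([
        np.stack([a + 1j * b, c + 1j * d], axis=-1),
        np.stack([-c + 1j * d, a - 1j * b], axis=-1),
    ], axis=-2)

# Left invariance: for a test function f and a fixed g, the averages of
# f(h) and f(gh) over Haar-random h agree (up to Monte Carlo error).
f = lambda m: np.real(np.trace(m, axis1=-2, axis2=-1)) ** 2
h = haar_su2(200_000)
g = haar_su2(1)[0]
assert abs(f(h).mean() - f(g @ h).mean()) < 0.05
```

The test function $(\operatorname{Re}\operatorname{tr})^2$ and the sample sizes are arbitrary choices for the illustration; any continuous $f$ would do.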
1.2. Extending results from finite to compact groups. Recall that if (π, V ) is a finite-dimensional
representation of a finite group, it can be conjugated to a unitary representation. The same is in
fact true for any compact group.
More specifically, as for finite groups, we can say the following:
(1) If (π, V ) is a finite-dimensional representation of a compact group G, then π can be conjugated
to a unitary representation
(2) The representative functions for distinct irreducible representations of a compact group are orthogonal.
(3) Given an orthonormal basis for an irreducible representation of a compact group, the matrix-entry
functions are pairwise orthogonal and form a basis for the space of representative functions of that representation.
1.3. Establishing some hypotheses. Consider the space of all functions that can be expressed as
complex linear combinations of the representative functions. These are clearly the functions that could
occur as matrix entries for some basis for some representation (not necessarily irreducible).
Call this space R. We see the following:
(1) R is a C-vector space and contains constants
(2) R is closed under complex conjugation. This is because the complex conjugate of a representative
function for some irreducible representation is a representative function for its contragredient
representation
(3) R is closed under multiplication. This is because the product of two representative functions is
a matrix entry in the tensor product of the representations
(4) Any two elements in G are separated by a representative function for some irreducible representation (that is, by some element in R)
We now notice that these are precisely the hypotheses for applying the Stone-Weierstrass Theorem
to the space C(G) of continuous functions on G. We hence conclude that R is dense in the space of
continuous functions on G.
1.4. Peter-Weyl theorem. The Peter-Weyl theorem in essence says that if f is a continuous function,
then it occurs as an infinite sum of the basis representative functions. However, the Peter-Weyl theorem,
on its own, does not guarantee a strong convergence of any sort. For a stronger convergence, we need to
appeal to differentiability.
© Vipul Naik, B.Sc. (Hons) Math and C.S., Chennai Mathematical Institute.
2. Haar measure on SU (2)
2.1. The goal. The goal is to explicitly describe a Haar measure on SU (2) with respect to which we
can define and then discuss properties of the integral.
First, observe that SU(2) is the same as $S^3$. Thus, describing a Haar measure on SU(2) reduces to
defining a Haar measure on $S^3$. But $S^3$ embeds in $\mathbb{R}^4$; in fact, it embeds in such a way that if we choose
polar coordinates, it is precisely the subset where r takes a fixed value. Using the volume element in $\mathbb{R}^4$
and this fact, we get:
Let g be parameterized by $(\theta, \varphi, \psi)$ (the r can be removed as r = 1). Then
\[ dg = \frac{1}{2\pi^2} \sin^2\theta \, \sin\varphi \, d\theta \, d\varphi \, d\psi. \]
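As a quick sanity check, one can verify numerically that this volume element has total mass 1 over $\theta \in [0, \pi]$, $\varphi \in [0, \pi]$, $\psi \in [0, 2\pi]$; a sketch assuming NumPy:

```python
import numpy as np

# Midpoint-rule check that (1/(2*pi^2)) sin^2(theta) sin(phi) dtheta dphi dpsi
# integrates to 1 over theta, phi in [0, pi] and psi in [0, 2*pi].
N = 20_000
grid = (np.arange(N) + 0.5) * np.pi / N   # midpoints of [0, pi]
dt = np.pi / N
I_theta = np.sum(np.sin(grid) ** 2) * dt  # integral of sin^2 over [0, pi] = pi/2
I_phi = np.sum(np.sin(grid)) * dt         # integral of sin over [0, pi] = 2
total = (1 / (2 * np.pi ** 2)) * I_theta * I_phi * (2 * np.pi)  # psi gives 2*pi
assert abs(total - 1.0) < 1e-6
```

This matches the closed-form computation $\frac{1}{2\pi^2} \cdot \frac{\pi}{2} \cdot 2 \cdot 2\pi = 1$.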
2.2. The embedding of SU(2) in SO(4). Note that since SU(2) is the same as $S^3$, the left and right
multiplication maps in SU(2) respectively define self-maps of $S^3$. These self-maps extend to isometries
of $\mathbb{R}^4$. Since left and right multiplication commute, we get a map:
SU (2) × SU (2) → SO(4)
Note that surjectivity follows by comparing dimensions. Further, since SU (2) is simply connected, so
is SU (2) × SU (2) and hence it is a universal covering space of SO(4).
Further, since SU(2) is a double cover of SO(3), we can say that SO(3) × SO(3) “looks like” SO(4)
in the sense that they are isogenous (there is a group with maps of finite kernel to each of them).
In general, the map SU (n) × SU (n) → SO(2n) still exists but it may not be surjective.
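The map $SU(2) \times SU(2) \to SO(4)$ can also be checked numerically: a pair $(g, h)$ acts on $\mathbb{R}^4$ (identified with the quaternions) by $x \mapsto g x h^{-1}$, and the resulting $4 \times 4$ real matrix is special orthogonal. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

def quat_to_mat(v):
    """Embed (a, b, c, d) in R^4 as the 2x2 complex matrix of a quaternion."""
    a, b, c, d = v
    return np.array([[a + 1j * b, c + 1j * d], [-c + 1j * d, a - 1j * b]])

def mat_to_quat(M):
    """Read the coordinates (a, b, c, d) back off the first row of the matrix."""
    return np.array([M[0, 0].real, M[0, 0].imag, M[0, 1].real, M[0, 1].imag])

def random_su2():
    v = rng.normal(size=4)
    return quat_to_mat(v / np.linalg.norm(v))  # unit quaternion = SU(2) element

# The map x -> g x h^{-1} on R^4, for g, h in SU(2), as a 4x4 real matrix R.
g, h = random_su2(), random_su2()
R = np.column_stack([mat_to_quat(g @ quat_to_mat(e) @ h.conj().T)
                     for e in np.eye(4)])
assert np.allclose(R @ R.T, np.eye(4))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # in fact R lies in SO(4)
```

Here $h^{-1} = h^{\dagger}$ since $h$ is unitary, and the columns of $R$ are the images of the standard basis of $\mathbb{R}^4$.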
3. Lecture ten: what was done
3.1. The Fourier series for SU(2). Suppose f is a square-integrable function:
\[ \int_G |f(g)|^2 \, dg < \infty \]
Now, let’s consider the case of SU(2) and square-integrable functions on SU(2). We know that a basis
for these is given by the functions $\varphi^{(n)}_{ij}$, where $\varphi^{(n)}_{ij}$ denotes the $(ij)^{\mathrm{th}}$ matrix entry of the representation
$\varphi^{(n)}$ with respect to the standard basis of monomials.
As in the abstract case, we can, by abstract nonsense, conclude that for $L^2$ functions the Fourier series
converges to the function in the mean-square ($L^2$) sense.
We now want to prove that, for sufficiently smooth functions, convergence actually happens in the uniform sense.
3.2. Uniform convergence. We shall prove that for a $C^2$ function, the Fourier series converges uniformly. Let’s first write out an explicit description of the Fourier coefficients:
\[ a^{(n)}_{ij} = \int_G f(g) \, \overline{\psi^{(n)}_{ij}(g)} \, dg \]
We want to show that the $\pi_n$'s exhaust all irreducible representations of G. This is equivalent to
showing that if $f \perp \pi_n$ for every n, then f is the zero function.
3.3. Proof that these are all the irreducible representations. Earlier, we had given a proof that
the only irreducible representations are the $\pi_n$'s, by actually starting with an irreducible representation
and locating a basis on which the operators $X_1$, $X_2$ and $X_3$ act the way they do on the $\phi_k$'s.
The proof we give now uses the orthogonality relations: it shows that if f is orthogonal to all the
representative functions, then it must be the zero function.
Here’s the proof of that:
First, write the elements of SU(2) using the identification with $S^3$ and the polar coordinates from
$\mathbb{R}^4$ for $S^3$. This is the $(\theta, \varphi, \psi)$ description of SU(2), where $\theta \in [0, \pi]$, $\varphi \in [0, \pi]$ and $\psi \in [0, 2\pi]$.
Consider any g ∈ G; g is conjugate to g(θ) for the same θ. Here g(θ) is the matrix
\[ g(\theta) = \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix} \]
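This conjugacy can be illustrated numerically: the eigenvalues of any element of SU(2) form a pair $e^{i\theta}, e^{-i\theta}$ on the unit circle. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)

# A Haar-random SU(2) element via a uniform point on the unit sphere in R^4.
v = rng.normal(size=4)
a, b, c, d = v / np.linalg.norm(v)
g = np.array([[a + 1j * b, c + 1j * d], [-c + 1j * d, a - 1j * b]])

eig = np.linalg.eigvals(g)
# Both eigenvalues lie on the unit circle, and their product is det(g) = 1,
# so they are of the form e^{i theta} and e^{-i theta}.
assert np.allclose(np.abs(eig), 1.0)
assert np.isclose(eig[0] * eig[1], 1.0)
```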
The conjugacy classes are precisely these latitudes (with the poles corresponding to the identity element
etc.). Thus, we can first integrate, for a given latitude, over that latitude, and then integrate over θ.
Once we integrate over a latitude, we can average out, and we get a class function – a function constant
on every conjugacy class. We know that the characters form an orthonormal basis for the space of class
functions. More specifically, within the direct sum of the representative function spaces for a bunch of
representations, the characters generate all the class functions.
Thus, the problem of showing that f is orthogonal to every representative function reduces to the
problem of proving that if we take the class function associated with f , this class function is orthogonal
to every character.
3.4. For the class functions. For the representations πn , we have:
\[ \chi_n(g(\theta)) = \frac{\sin\big((n+1)\theta\big)}{\sin\theta}, \]
defined as $(n + 1)$ at $\theta = 0$.
To now prove that any function orthogonal to these has to be the zero function, we appeal to the
theory of Fourier series on S 1 by extending the domain to [−π , π] and using some basic logic.
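The orthonormality of the characters that this argument relies on can be checked numerically: integrating a class function over SU(2) against Haar measure reduces, by the Weyl integration formula, to integrating against $\frac{2}{\pi}\sin^2\theta \, d\theta$ on $[0, \pi]$. A sketch assuming NumPy:

```python
import numpy as np

# Characters of SU(2): chi_n(theta) = sin((n+1) theta) / sin(theta).
# Class functions are integrated against the measure (2/pi) sin^2(theta) dtheta.
N = 100_000
theta = (np.arange(N) + 0.5) * np.pi / N  # midpoints avoid theta = 0, pi
dtheta = np.pi / N

def chi(n):
    return np.sin((n + 1) * theta) / np.sin(theta)

# Orthonormality: <chi_n, chi_m> = delta_{nm}.
for n in range(4):
    for m in range(4):
        inner = (2 / np.pi) * np.sum(chi(n) * chi(m) * np.sin(theta) ** 2) * dtheta
        assert abs(inner - (1.0 if n == m else 0.0)) < 1e-6
```

Note that $\chi_n(\theta)\chi_m(\theta)\sin^2\theta = \sin((n+1)\theta)\sin((m+1)\theta)$, so the check reduces to the familiar orthogonality of sines on $[0, \pi]$.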
4. Using smoothness to guarantee uniform convergence
In this lecture, we discuss the gory details of why being C 2 is a sufficient condition for uniform
convergence.
4.1. Some fancy-looking stuff. We know that the product rule, or the integration by parts rule, can
be viewed as follows:
\[ \int_a^b f' g = - \int_a^b f g' \]
which basically follows from the fact that $f'g + fg' = (fg)'$ is an exact differential (assuming the boundary term $[fg]_a^b$ vanishes, e.g. for periodic functions).
In our case, we get the following more general version:
D
E D
E
X̃f , h = f , −X̃h
Suppose now we take Y = X̃ X̃. Then:
h Yf , h i=h f , Yh i
Thus, Y is a self-adjoint second-order linear differential operator.
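A discrete analogue of these identities can be checked on the circle, with $X$ realized as the derivative of smooth $2\pi$-periodic grid functions (computed spectrally via the FFT); a sketch assuming NumPy:

```python
import numpy as np

N = 4096
x = np.arange(N) * 2 * np.pi / N
dx = 2 * np.pi / N
f = np.exp(np.sin(x))      # a smooth 2*pi-periodic test function
h = np.cos(3 * x) + 2      # another smooth periodic test function

def X(u):
    """Spectral derivative d/dx of a periodic grid function."""
    k = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

inner = lambda u, v: np.sum(u * v) * dx

# <X f, h> = <f, -X h>  (skew-symmetry of d/dx on periodic functions)
assert abs(inner(X(f), h) - inner(f, -X(h))) < 1e-8
# <Y f, h> = <f, Y h> for Y = X X  (self-adjointness of the second derivative)
assert abs(inner(X(X(f)), h) - inner(f, X(X(h)))) < 1e-8
```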
4.2. Action of the Lie algebra. Recall the following:
\begin{align*}
\dot{\pi}_n(X_1)(\phi_k) &= \frac{i}{2}\big(k\phi_{k-1} + (n-k)\phi_{k+1}\big) \\
\dot{\pi}_n(X_2)(\phi_k) &= \frac{1}{2}\big(k\phi_{k-1} + (k-n)\phi_{k+1}\big) \\
\dot{\pi}_n(X_3)(\phi_k) &= \frac{(2k-n)i}{2}\,\phi_k
\end{align*}
Now consider:
\[ \Omega = -\frac{1}{2}\left( \tilde{X}_1\tilde{X}_1 + \tilde{X}_2\tilde{X}_2 + \tilde{X}_3\tilde{X}_3 \right) \]
Now we have:
\[ (\tilde{X}\pi_{ij})(g) = \big(\pi(g)\,\dot{\pi}(X)\big)_{ij} \]
Further, we have that:
\[ -\frac{1}{2}\left( \dot{\pi}_n(X_1)^2 + \dot{\pi}_n(X_2)^2 + \dot{\pi}_n(X_3)^2 \right) = \frac{1}{8}\, n(n+2)\, I \]
This gives us:
\[ \Omega\big(\pi^{(n)}_{ij}\big) = \frac{1}{8}\, n(n+2)\, \pi^{(n)}_{ij} \]
Normalize $\pi_{ij}$ to get $\psi_{ij}$, and we have that each of the $\psi_{ij}$'s is an eigenvector of $\Omega$ with eigenvalue $n(n+2)/8$.
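The Casimir identity above can be verified by realizing $\dot{\pi}_n(X_1), \dot{\pi}_n(X_2), \dot{\pi}_n(X_3)$ as matrices in the basis $\phi_0, \dots, \phi_n$, using the convention $\dot{\pi}_n(X_1)\phi_k = \frac{i}{2}(k\phi_{k-1} + (n-k)\phi_{k+1})$ and its companions; a numerical sketch assuming NumPy:

```python
import numpy as np

def pi_dot(n):
    """Matrices of the derived representation on the (n+1)-dimensional
    irreducible representation of SU(2), in the basis phi_0, ..., phi_n."""
    d = n + 1
    X1 = np.zeros((d, d), dtype=complex)
    X2 = np.zeros((d, d), dtype=complex)
    X3 = np.zeros((d, d), dtype=complex)
    for k in range(d):
        if k >= 1:               # coefficient of phi_{k-1}
            X1[k - 1, k] = 1j * k / 2
            X2[k - 1, k] = k / 2
        if k <= n - 1:           # coefficient of phi_{k+1}
            X1[k + 1, k] = 1j * (n - k) / 2
            X2[k + 1, k] = (k - n) / 2
        X3[k, k] = (2 * k - n) * 1j / 2
    return X1, X2, X3

def casimir(n):
    X1, X2, X3 = pi_dot(n)
    return -0.5 * (X1 @ X1 + X2 @ X2 + X3 @ X3)

# The Casimir acts as the scalar n(n+2)/8 on the n-th irreducible representation.
for n in range(1, 6):
    assert np.allclose(casimir(n), (n * (n + 2) / 8) * np.eye(n + 1))
```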
4.3. The situation we have. $\Omega$ is a linear differential operator with the property that all the representative functions are eigenvectors of $\Omega$, with the representative functions of a particular $\pi_n$ all having
the same eigenvalue, which is a quadratic function of n.
This is akin to the $S^1$ situation, where the $d/dx$ operator has the functions $e^{inx}$ as eigenvectors for
different n.
Theorem 1. If $f \in C^2$, then $S_n(f) \to f$ uniformly.
Proof. To prove this, we show that $S_n(f)$ is uniformly Cauchy, viz we show that:
(1) $\sum_n \sum_{i,j} \big|a^{(n)}_{ij}\big|^2 < \infty$
(2) $\sum_n \sum_{i,j} \frac{n(n+2)}{8} \big|a^{(n)}_{ij}\big|^2 < \infty$
(3) $\sum_{i,j} \big|\pi^{(n)}_{ij}(x)\big|^2 = n + 1$ for every x (since $\pi^{(n)}(x)$ is a unitary $(n+1) \times (n+1)$ matrix)
(4) $\psi^{(n)}_{ij} = \sqrt{n+1}\,\pi^{(n)}_{ij}$
We can use these, along with Cauchy-Schwarz, to prove the theorem.
4.4. Terminology and fancy language. We often say the following:
(1) We say that the operator $\Omega$ has a spectrum comprising the values $n(n+2)/8$, with the representative functions as eigenvectors. Thus, the eigenspaces are of dimensions $(n+1)^2$.
(2) The fact that every function is an infinite linear combination of representative functions is expressed by saying that every function is a superposition of eigenfunctions of $\Omega$.
(3) The representative functions are called spherical harmonics. Thus, we often say that every
function is a superposition of spherical harmonics (which play a role analogous to the exponential
functions $e^{inx}$, which are the circular harmonics).
4.5. General approach for a compact group. For any compact linear Lie group, do the following:
(1) Consider an inner product on the Lie algebra that is invariant under the adjoint action of the
Lie group
(2) Obtain an orthonormal basis with respect to this inner product
(3) Take the sum of squares of that basis, viewed as differential operators (this is called the Laplacian of the Lie group)
The Laplacian may not itself work, but usually some polynomial in it will. We get a formula
in terms of many variables, and the eigenvalues are polynomially controlled.
All this leads to the Weyl character formula (that’s a rather tricky thing that was only mentioned, and not
worked out, in this lecture).
4.6. For the case of SO(3). Functions on SO(3) are the same as antipode-invariant functions on
$SU(2) = S^3$. Thus, the condition of being a function on SO(3) can be captured by requiring that
certain pairs of coefficients in the Fourier series for the function on SU(2) be equal.
4.7. For the case of S 2 . We have:
S 2 = SO(3)/SO(2)
Thus, functions on $S^2$ correspond to functions on SO(3) that are invariant under right multiplication
by SO(2).
5. Three general situations
5.1. The situation of a group and a maximal compact subgroup. The situation is as follows:
(1) G is a linear Lie group
(2) K is a maximal compact subgroup of G (unique up to conjugacy)
(3) We want to study functions on the coset space G/K.
Three particular cases:
(1) G = SO(3) and K = SO(2). Then $G/K = S^2$
(2) $G = SL(2, \mathbb{R})$ and K = SO(2). Then G/K is the upper half-plane
(3) $G = \mathbb{R}^n \rtimes SO(n)$ and K = SO(n). Then G/K is $\mathbb{R}^n$
In all the cases, the ring of invariant differential operators is the polynomial ring in a suitably defined Laplacian.