LECTURES 5 AND 6 OF LIE-THEORETIC METHODS
VIPUL NAIK
Abstract. These are lecture notes for the course titled Lie-theoretic methods in analysis taught by
Professor Alladi Sitaram. In these lecture notes, Professor Sitaram explicitly describes representations
of SU (2) on spaces of homogeneous polynomials and proves the irreducibility of these representations.
1. Recapitulation from last time
1.1. The commutative diagram. Note that π is smooth, and in fact:
π̇(X) = (d/dt)|_{t=0} π(exp(tX))
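For concreteness, here is a minimal numerical sketch of this formula, assuming the illustrative representation π(g) = g ⊗ g of GL(2, C), whose differential should be π̇(X) = X ⊗ I + I ⊗ X; a finite-difference quotient is compared against this.

```python
import numpy as np
from scipy.linalg import expm

# Assumed illustrative representation: pi(g) = g (tensor) g of GL(2, C).
# Its differential at the identity should be pidot(X) = X(x)I + I(x)X.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def pi(g):
    return np.kron(g, g)

t = 1e-6
# central-difference approximation of d/dt|_{t=0} pi(exp(tX))
approx = (pi(expm(t * X)) - pi(expm(-t * X))) / (2 * t)
expected = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)
print(np.allclose(approx, expected, atol=1e-5))  # True
```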
1.2. Invariant subspaces.
Definition. A subspace W of a vector space V is said to be invariant under a Lie group (respectively Lie algebra) acting on V if every element of the Lie group (respectively Lie algebra) maps W into itself.
Claim. Let G be a connected linear Lie group with Lie algebra g. Suppose G acts on a vector space
V , and we consider the induced action of g on V . For a linear subspace W of V :
W is g-invariant ⇐⇒ W is G-invariant
Proof. The forward implication: Suppose W is g-invariant. Since G is connected, every element of G is a product of exponentials of elements of g (we proved this in an earlier lecture). It thus suffices to show that the exponential of any element of g leaves W invariant.
Let X ∈ g and v ∈ W . Consider exp(X). We need to show that π(exp(X))v ∈ W . By the commutative
diagram, this reduces to showing that exp(π̇(X))(v) is in W .
Let Y = π̇(X). We need to show that exp(Y )(v) ∈ W , or equivalently that the following element is
in W :
v + Y v + (Y^2/2!) v + ⋯
We know that Y^n v ∈ W for every n, because W is invariant under Y = π̇(X). Thus, since W is a vector space, all the
partial sums of the above series are in W . Further, since W is topologically closed (on account of being
a subspace of a finite-dimensional vector space) the limit is also in W , and we are done.
The backward implication: We assume that W is G-invariant. Let X ∈ g and v ∈ W . We need to
show that π̇(X)(v) ∈ W .
For every t, the element exp(tX) lies in G, so π(exp(tX))v ∈ W. Differentiating at t = 0, the difference quotients (π(exp(tX))v − v)/t all lie in W, and since W is a closed subspace, their limit π̇(X)v also lies in W.
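For concreteness, the forward implication can be checked in coordinates; a minimal sketch, assuming the illustrative choice W = span{e1}, which is invariant under any upper-triangular matrix Y:

```python
import numpy as np
from scipy.linalg import expm

# Assumed example: W = span{e1} in R^3 is invariant under any upper-triangular Y.
Y = np.array([[1.0, 2.0, 3.0],
              [0.0, -1.0, 4.0],
              [0.0, 0.0, 0.5]])
e1 = np.array([1.0, 0.0, 0.0])

w = expm(Y) @ e1
# exp(Y) e1 should again be a multiple of e1, i.e. it stays inside W
print(np.allclose(w[1:], 0.0))  # True
```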
1.3. Irreducible representations.
Definition (Irreducible representation for a Lie group, Lie algebra). A representation of a Lie group
(respectively Lie algebra) is said to be irreducible if the only invariant subspaces for the representation are the trivial space and the whole space. In other words, there are no proper nontrivial
invariant subspaces.
© Vipul Naik, B.Sc. (Hons) Math and C.S., Chennai Mathematical Institute.
From the claim in the previous subsection, we obtain the following claim:
Claim. A representation of a connected linear Lie group is irreducible if and only if the corresponding
representation of its Lie algebra is irreducible.
This gives us a “way” of proving that a given representation of a linear Lie group is irreducible.
2. Spaces of homogeneous polynomials
2.1. Action of general linear group on these. The group GL(r, k), for any field k, acts on the space
of all polynomials in r variables by naturally acting on the variables as per the linear transformations. In
this action, the spaces of homogeneous polynomials of degree n are invariant subspaces, and since every polynomial can be uniquely expressed as a sum of its homogeneous components, these spaces in fact give a direct sum decomposition of the space of all polynomials into invariant subspaces.
Since this is a representation of the general linear group, it also restricts to a representation of any subgroup of the general linear group.
2.2. Representations of SU (2) on spaces of homogeneous polynomials. For any n, consider the
space of homogeneous polynomials of degree n in 2 complex variables (say z1 and z2 ). This space has a
natural basis: the polynomials Φk(z1, z2) = z1^k z2^(n−k) for 0 ≤ k ≤ n.
SU(2) acts on this space by substitution: an element of SU(2) sends the column vector with entries z1 and z2 to the matrix times this column vector, and the homogeneous polynomial is altered correspondingly.
Note that the representation is (n + 1)-dimensional.
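For concreteness, here is a small symbolic sketch of this action, assuming n = 2 and an illustrative rotation matrix in SU(2); each Φk is transformed by the substitution described above and remains a homogeneous polynomial of degree 2.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
n = 2  # degree of the homogeneous polynomials

def Phi(k, a, b):
    # Phi_k(z1, z2) = z1^k * z2^(n - k)
    return a**k * b**(n - k)

# an assumed sample element of SU(2): a real rotation by the angle 1/3
t = sp.Rational(1, 3)
g = sp.Matrix([[sp.cos(t), sp.sin(t)],
               [-sp.sin(t), sp.cos(t)]])

# act by substituting (z1, z2) -> g * (z1, z2) into the polynomial
w1, w2 = g * sp.Matrix([z1, z2])
for k in range(n + 1):
    print(f"g . Phi_{k} =", sp.expand(Phi(k, w1, w2)))
```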
2.3. Irreducibility of the representations. To prove that each of the representations on spaces of homogeneous polynomials is irreducible, it suffices to show that the corresponding representations of the Lie algebra (which we denote by su(2)) are irreducible. Recall that su(2) is the space of traceless skew-Hermitian 2 × 2 matrices.
To prove that any representation of su(2) on a space of homogeneous polynomials is irreducible, we
consider the following real basis for su(2):
X1 = (1/2) [[0, i], [i, 0]]
X2 = (1/2) [[0, −1], [1, 0]]
X3 = (1/2) [[i, 0], [0, −i]]
Here are the exponential relations for these:
exp(tX1) = [[cos(t/2), i sin(t/2)], [i sin(t/2), cos(t/2)]]
exp(tX2) = [[cos(t/2), −sin(t/2)], [sin(t/2), cos(t/2)]]
exp(tX3) = [[e^(it/2), 0], [0, e^(−it/2)]]
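These closed forms can be checked numerically; a minimal sketch, with t = 0.7 chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm

X1 = 0.5 * np.array([[0, 1j], [1j, 0]])
X2 = 0.5 * np.array([[0, -1], [1, 0]], dtype=complex)
X3 = 0.5 * np.array([[1j, 0], [0, -1j]])

t = 0.7  # an arbitrary test value
c, s = np.cos(t / 2), np.sin(t / 2)
closed_forms = [
    np.array([[c, 1j * s], [1j * s, c]]),
    np.array([[c, -s], [s, c]]),
    np.diag([np.exp(1j * t / 2), np.exp(-1j * t / 2)]),
]
for X, M in zip([X1, X2, X3], closed_forms):
    print(np.allclose(expm(t * X), M))  # True, True, True
```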
We examine the effect of π̇n(Xi) on each of the Φk. In order to do this, we determine the effect of π(e^{tXi}) on Φk and then differentiate as a function of t.
We get:
(π̇n(X1)Φk)(z1, z2) = (i/2) (k Φk−1 + (n − k) Φk+1)
(π̇n(X2)Φk)(z1, z2) = (1/2) (k Φk−1 − (n − k) Φk+1)
(π̇n(X3)Φk)(z1, z2) = ((2k − n)/2) i Φk
The third equation tells us that the Φk are eigenvectors with distinct eigenvalues for the operator π̇n(X3).
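As a consistency check, the matrices of π̇n(X1), π̇n(X2), π̇n(X3) in the basis Φ0, …, Φn can be read off from the three formulas above; a minimal sketch (with n = 4 chosen arbitrarily) verifying that they satisfy the same commutation relations as X1, X2, X3 (namely [X1, X2] = X3, [X2, X3] = X1, [X3, X1] = X2), that π̇n(X3) has n + 1 distinct eigenvalues, and that suitable combinations act as ladder operators on the Φk:

```python
import numpy as np

def rep_matrices(n):
    """Matrices of pidot_n(X1), pidot_n(X2), pidot_n(X3) in the basis
    Phi_0, ..., Phi_n, read off from the three formulas above."""
    A1 = np.zeros((n + 1, n + 1), dtype=complex)
    A2 = np.zeros((n + 1, n + 1), dtype=complex)
    A3 = np.zeros((n + 1, n + 1), dtype=complex)
    for k in range(n + 1):
        if k > 0:
            A1[k - 1, k] = 0.5j * k
            A2[k - 1, k] = 0.5 * k
        if k < n:
            A1[k + 1, k] = 0.5j * (n - k)
            A2[k + 1, k] = -0.5 * (n - k)
        A3[k, k] = 0.5j * (2 * k - n)
    return A1, A2, A3

def bracket(A, B):
    return A @ B - B @ A

n = 4
A1, A2, A3 = rep_matrices(n)

# the same commutation relations as X1, X2, X3 in su(2)
print(np.allclose(bracket(A1, A2), A3),
      np.allclose(bracket(A2, A3), A1),
      np.allclose(bracket(A3, A1), A2))   # True True True

# pidot_n(X3) has the n + 1 distinct eigenvalues i(2k - n)/2
print(np.round(np.sort(np.linalg.eigvals(A3).imag), 6))

# ladder behaviour used in the irreducibility proof below:
# (-i*A1 + A2) Phi_k = k Phi_{k-1},  (-i*A1 - A2) Phi_k = (n - k) Phi_{k+1}
e = np.eye(n + 1)
k = 2
print(np.allclose((-1j * A1 + A2) @ e[:, k], k * e[:, k - 1]),
      np.allclose((-1j * A1 - A2) @ e[:, k], (n - k) * e[:, k + 1]))  # True True
```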
Claim. For any n, the representation π̇n is an irreducible representation of the Lie algebra su(2).
Proof. Let V denote the space of homogeneous polynomials of degree n. Suppose W is a nontrivial
invariant subspace of V . We need to show that W is the whole of V .
First, note that since W is an invariant subspace, the operator π̇n(X3) restricts to an operator on W. Since we are working over the complex numbers, this restriction has an eigenvector inside W. But all the eigenspaces of π̇n(X3) are one-dimensional, spanned by the Φk, so at least one of the Φk lies inside W. But then invariance under the operators π̇n(X1) and π̇n(X2) implies that all the Φk are in W: indeed, from the formulas above, (−i π̇n(X1) + π̇n(X2))Φk = k Φk−1 and (−i π̇n(X1) − π̇n(X2))Φk = (n − k) Φk+1, so starting from any one Φk we reach all the others. This forces W = V.
This illustrates a general idea: to show that a representation of a Lie algebra is irreducible, obtain a
collection of operators such that one operator has all eigenvalues distinct, and the other operators ensure
that any invariant subspace containing one eigenvector must contain all others.
3. Continuous implies differentiable
3.1. For reals.
Claim. Let φ be a continuous homomorphism from R into GL(V ). Then φ is differentiable and
φ(t) = e^{tA}, where A = φ̇(0).
Proof. The idea is to imitate the way we solve a “differential equation” over the real numbers.
Note that, once we know that φ is differentiable, differentiating the homomorphism relation φ(s + t) = φ(s)φ(t) with respect to t at t = 0 gives:
φ̇(s) = φ(s) φ̇(0)
=⇒ φ(s) = e^{s φ̇(0)}
We need to show that, under the assumptions that φ is continuous and a homomorphism, φ is differentiable.
For a fixed h, consider:
∫_0^h φ(t + s) dt = ∫_s^{s+h} φ(u) du
=⇒ φ(s) ∫_0^h φ(t) dt = ∫_s^{s+h} φ(u) du
(the first equality is the substitution u = t + s; the implication uses φ(t + s) = φ(s)φ(t) and pulls the constant factor φ(s) out of the integral).
The right-hand side, when viewed as a function of s, is differentiable (because it is the integral
of a continuous function). Note that we cannot directly comment on the differentiability of the left-hand
side because we only know that φ is continuous.
However, now that we know that the right-hand side is differentiable as a function of s, so is the left-hand side. Since the expression ∫_0^h φ(t) dt yields an invertible matrix for h sufficiently close to 0 (because (1/h) ∫_0^h φ(t) dt → φ(0) = I as h → 0), we have expressed φ(s) as a differentiable function times the inverse of a fixed invertible matrix (in a sufficiently small neighbourhood). Hence φ is differentiable.
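Here is a minimal numerical sketch of the key identity φ(s) ∫_0^h φ(t) dt = ∫_s^{s+h} φ(u) du, assuming the illustrative one-parameter group φ(t) = exp(tA) and approximating the integrals with a midpoint rule:

```python
import numpy as np
from scipy.linalg import expm

# assumed concrete one-parameter group: phi(t) = exp(tA)
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
phi = lambda t: expm(t * A)

def integrate(f, a, b, num=2000):
    # composite midpoint rule, applied entry-wise to the matrix-valued f
    dt = (b - a) / num
    ts = a + dt * (np.arange(num) + 0.5)
    return sum(f(t) for t in ts) * dt

s, h = 0.4, 0.25
lhs = phi(s) @ integrate(phi, 0.0, h)
rhs = integrate(phi, s, s + h)
print(np.allclose(lhs, rhs, atol=1e-6))  # True
```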
4. Proof of the commutative diagram
4.1. Background-filling. The gist of what we need to prove is that to every Lie group homomorphism,
there is a corresponding Lie algebra homomorphism, obtained merely by taking the differential at the
identity. Since the differential map is clearly well-defined, we only need to prove that it actually is a Lie
algebra homomorphism.
We thus need to prove that:
π̇(λX) = λ(π̇X)
π̇(X + Y ) = π̇(X) + π̇(Y )
π̇[X, Y] = [π̇X, π̇Y]
The idea behind both parts of the proof is to use the same techniques as we used in showing that the
Lie algebra of any Lie group is a Lie algebra.
4.2. Vector space part of proof. To prove this, we use the identity:
exp(X + Y) = lim_{n→∞} (exp(X/n) exp(Y/n))^n
and apply this limit at both ends, using the fact that at the group level it is a homomorphism.
More specifically, start off with exp(tπ̇(X + Y )), use the above limit expression, use the fact that π is
a homomorphism, and then again use the limit expression to simplify.
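A minimal numerical sketch of this limit formula, with arbitrary illustrative matrices X and Y:

```python
import numpy as np
from scipy.linalg import expm
from numpy.linalg import matrix_power

# arbitrary assumed matrices X, Y (they need not commute)
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

target = expm(X + Y)
for n in [1, 10, 100, 1000]:
    approx = matrix_power(expm(X / n) @ expm(Y / n), n)
    print(n, np.linalg.norm(approx - target))
# the error decreases roughly like 1/n
```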
4.3. Lie algebra part of proof. Here, we use the identity:
exp([X, Y]) = lim_{n→∞} (exp(X/n) exp(Y/n) exp(−X/n) exp(−Y/n))^{n²}
And again, starting off from exp(t[π̇X, π̇Y]), we convert to the limit expression, use the fact that π is a
homomorphism, and then again convert the limit expression back to a commutator.
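A minimal numerical sketch of this limit formula as well, again with arbitrary illustrative matrices (the convergence, roughly like 1/n, is slower than for the previous formula):

```python
import numpy as np
from scipy.linalg import expm
from numpy.linalg import matrix_power

# arbitrary assumed matrices X, Y
rng = np.random.default_rng(2)
X = 0.5 * rng.standard_normal((3, 3))
Y = 0.5 * rng.standard_normal((3, 3))

target = expm(X @ Y - Y @ X)   # exp([X, Y])
for n in [10, 100, 1000]:
    step = expm(X / n) @ expm(Y / n) @ expm(-X / n) @ expm(-Y / n)
    print(n, np.linalg.norm(matrix_power(step, n * n) - target))
# the error decreases roughly like 1/n
```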
5. Some differential operators
5.1. A first-order differential operator. Let X denote an element of the Lie algebra. Then X can also be thought of as a vector field on the group (extended by left multiplication or by right multiplication), and the resulting vector field is left-invariant (respectively right-invariant).
We know that a vector field defines a first-order homogeneous linear differential operator on the
collection of functions. Hence, elements of the Lie algebra can be viewed as first-order linear differential
operators.
We can in fact apply this differentiation (component-wise) even to vector-valued functions. In particular, we can apply it to matrix-valued functions, where we differentiate entry-by-entry.
We have the important formula:
(1)  (X̃π)(g) = π(g) π̇(X)
Thus, applying X̃ to a matrix coefficient gives a linear combination of various matrix coefficients.
The above equation connects the action of X as a differential operator with the Lie algebra representation π̇.
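A minimal numerical check of formula (1), assuming X̃ denotes the left-invariant vector field, so that (X̃f)(g) = d/dt|_{t=0} f(g exp(tX)), and taking the defining representation π(g) = g of GL(2, R), for which π̇(X) = X:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
g = expm(rng.standard_normal((2, 2)))   # an invertible matrix in GL(2, R)
X = rng.standard_normal((2, 2))

pi = lambda a: a   # the defining representation, with pidot(X) = X

t = 1e-6
# central-difference approximation of (Xtilde pi)(g) = d/dt|_{t=0} pi(g exp(tX))
lhs = (pi(g @ expm(t * X)) - pi(g @ expm(-t * X))) / (2 * t)
print(np.allclose(lhs, pi(g) @ X, atol=1e-5))  # True
```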
5.2. What we’ll be trying to do. We’ll be trying to manufacture higher-order differential operators
(which are basically obtained as sums of composites of first-order ones).
Index
irreducible representation
representation, irreducible