POLYNOMIAL BEHAVIOUR OF KOSTKA NUMBERS
DINUSHI MUNASINGHE
Abstract. Given two standard partitions λ+ = (λ+_1 ≥ · · · ≥ λ+_s) and λ− = (λ−_1 ≥ · · · ≥ λ−_r) we write λ = (λ+, λ−) and set sλ(x1, . . . , xt) := sλt(x1, . . . , xt), where λt = (λ+_1, . . . , λ+_s, 0, . . . , 0, −λ−_r, . . . , −λ−_1) has t parts. We provide two proofs that the Kostka numbers kλµ(t) in the expansion
sλ(x1, . . . , xt) = Σµ kλµ(t) mµ(x1, . . . , xt)
demonstrate polynomial behaviour, and establish the degrees and leading coefficients
of these polynomials. We conclude with a discussion of two families of examples that
introduce areas of possible further exploration.
Contents
1. Introduction
2. Representations of GLn
2.1. Representations
2.2. Characters of GLn
3. Symmetric Polynomials
3.1. Kostka Numbers
3.2. Littlewood-Richardson Coefficients
3.3. Representations and Characters
4. Restrictions
4.1. Interlacing Partitions
4.2. Branching Rules
5. Beyond Polynomial Representations
6. Weight Decompositions
7. Some Notation
8. Stability
8.1. Kostka Numbers
8.2. Littlewood-Richardson Coefficients
9. Asymptotics
10. The Polynomial Behaviour of Kostka Numbers
11. Further Asymptotics
12. Remarks
Appendix: Solving Difference Equations
References
1. Introduction
The Kostka numbers, which we will denote kλµ , were introduced by Carl Kostka in
his 1882 work relating to symmetric functions. The Littlewood-Richardson coefficients
cνλµ were introduced by Dudley Ernest Littlewood and Archibald Read Richardson in
their 1934 paper Group Characters and Algebra. These two combinatorial objects are
ubiquitous, appearing throughout combinatorics and representation theory, which we will address in this work, as well as in algebraic geometry.
Being such natural objects, the Kostka numbers and Littlewood-Richardson coefficients have been studied extensively. There are explicit combinatorial algorithms for
computing both types of coefficients, and many well-established properties. There still
exist, however, open questions and areas which remain unexplored. One area of study
which contains interesting open questions relating to these coefficients is that of asymptotics, which we will explore in this work.
We define the Littlewood-Richardson coefficient cνλµ as the multiplicity of the Schur
polynomial sν (x1 , . . . , xt ) in the decomposition of the product of Schur polynomials
sλ(x1, . . . , xt)sµ(x1, . . . , xt) = Σν cνλµ sν(x1, . . . , xt).
Similarly, we can define the Kostka number kλµ as the coefficient of mµ (x1 , . . . , xt ) in
the expansion of the Schur polynomial sλ (x1 , . . . , xt ) as
sλ(x1, . . . , xt) = Σµ kλµ mµ(x1, . . . , xt),
where mµ (x1 , . . . , xt ) denotes the minimal symmetric polynomial generated by the monomial xµ . In the above formulas, the number of variables, t, may be arbitrary. However,
cνλµ and kλµ become independent of t as it becomes sufficiently large.
We begin our study by extending the definition of the Schur polynomial sλ (x1 , . . . , xt )
to arbitrary weakly decreasing sequences of integers λ = (λ1 ≥ · · · ≥ λt ) by shifting λ
to λ + ct , i.e.
sλ(x1, . . . , xt) = sλ+ct(x1, . . . , xt)/(x1 . . . xt)^c.
Given two standard partitions λ+ = (λ+_1 ≥ · · · ≥ λ+_s) and λ− = (λ−_1 ≥ · · · ≥ λ−_r) we write λ = (λ+, λ−) and set
sλ(x1, . . . , xt) := sλt(x1, . . . , xt),
where λt = (λ+_1, . . . , λ+_s, 0, . . . , 0, −λ−_r, . . . , −λ−_1) has t parts.
We consider the expansions
(1) sλ(x1, . . . , xt)sµ(x1, . . . , xt) = Σν cνλµ(t) sν(x1, . . . , xt)
and
(2) sλ(x1, . . . , xt) = Σµ kλµ(t) mµ(x1, . . . , xt)
where all λ, µ, and ν above are pairs λ = (λ+ , λ− ), µ = (µ+ , µ− ), and ν = (ν+ , ν− ).
Both sums in (1) and (2) are finite. A result of Penkov and Styrkas, [PS], implies
that the coefficients cνλµ (t) stabilize for sufficiently large t. The coefficients kλµ (t), on
the other hand, demonstrate polynomial behaviour. This result has been known for some time; see for example [BKLS] and the references therein. The methods used by Benkart
and collaborators apply to representations of groups beyond GLn , but provide little
information about the Kostka polynomials themselves.
The purpose of this work is to give two proofs of the fact that kλµ (t) is a polynomial in
t. The proofs given are specific to GLn , but will hopefully lead to formulas or algorithms
for computing these polynomials explicitly. The first proof uses the stability of the
Littlewood-Richardson coefficients, and the second uses a recurrence relation established
through branching rules.
We end the document with a discussion of two families of examples which will suggest
further generalizations and directions of study.
Conventions. Unless otherwise stated, we will be working over C, representations are
finite dimensional, and N = Z≥0 . We denote by [k] the set of integers {1, . . . , k}.
2. Representations of GLn
Definition 1. We define a polynomial representation of the group G = GLn to be
a group homomorphism
ρ : G → GL(V) ≅ GLN
where N = dim(V ) is the dimension of the representation, such that for every A =
(aij ) ∈ GLn , ρ(A) = (ρij (A)), where each entry ρij (A) is a polynomial in the entries akl
of A ∈ GLn .
In this work we will refer to polynomial representations simply as representations
of GLn . For a more extensive discussion, one can refer to [GW] and [FH].
2.1. Representations. All finite dimensional representations of GLn are completely reducible; i.e., if ρ is a representation of GLn such that N = dim(V) < ∞, then there is a decomposition of V as the direct sum of irreducible representations. It is therefore
natural to want to parameterize all possible non-isomorphic irreducible representations,
and when given a representation, to ask how it could be decomposed as the direct sum
of irreducible components. We turn our attention to the Schur-Weyl Duality, which will
allow us to build all irreducible representations of GLn .
2.1.1. Schur-Weyl Duality. The Schur-Weyl duality relates the irreducible representations of the General Linear group, GLn, to those of the Symmetric group, Sk. We begin by considering the kth tensor power of V, V⊗k, where V = Cn. The General Linear group
GLn acts on Cn by matrix multiplication, so it acts on V ⊗k by
g · (v1 ⊗ · · · ⊗ vk ) = g · v1 ⊗ · · · ⊗ g · vk
for g ∈ GLn . On the other hand, Sk acts on V ⊗k by permuting the terms of v1 ⊗· · ·⊗vk ∈
V ⊗k . That is to say,
σ · (v1 ⊗ · · · ⊗ vk) = vσ−1(1) ⊗ · · · ⊗ vσ−1(k).
These actions commute, so V ⊗k is a GLn ×Sk -module. We thus have the decomposition:
V⊗k ≅ ⊕λ Vλ ⊗ Sλ,
where the sum runs over all λ partitioning k, the Sλ are the Sk-modules known as
Specht modules, and the Vλ are GLn -modules. In particular, each Vλ is a polynomial
GLn -module, and is irreducible if λ has at most n parts and is the zero module otherwise.
Furthermore, every irreducible polynomial GLn-module is isomorphic to a module Vλ
for λ = (λ1 ≥ · · · ≥ λn ) with λi ∈ Z≥0 . Two modules Vλ and Vµ are isomorphic if and
only if λ = µ. This gives us a concrete way of realizing the irreducible representations
of GLn .
We will not prove these results in this work; however, proofs can be found in [GW].
Example 1. A simple example is the decomposition of V⊗2:
V ⊗ V = Sym2 V ⊕ Alt2 V,
where Sym2 V is the submodule of V ⊗ V generated by all elements of the form u ⊗ v + v ⊗ u for u, v ∈ V, and Alt2 V is the submodule of V ⊗ V generated by the elements of the form u ⊗ v − v ⊗ u for u, v ∈ V. While it is clear that V ⊗ V = Sym2 V ⊕ Alt2 V as vector spaces, it is not immediately obvious that this is a decomposition into irreducible submodules. This fact can be established directly. However, it follows cleanly from the Schur-Weyl duality: S2 has two irreducible representations, indexed by the two partitions of 2, so we have the decomposition into irreducibles
V ⊗ V = V(2) ⊕ V(1,1),
where S2 acts on V(2) via the trivial representation, and on V(1,1) via the sign representation.
Now,
V(2) = {x ∈ V ⊗ V |σ · x = x, ∀σ ∈ S2 }
and
V(1,1) = {x ∈ V ⊗ V |σ · x = sgn(σ)x, ∀σ ∈ S2 },
but these are precisely Sym2 V and Alt2 V. Thus, these modules are irreducible GLn-modules by the Schur-Weyl duality.
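As a quick sanity check, the decomposition of V ⊗ V can be tested numerically on a diagonal group element, whose trace on each summand is easy to write down (anticipating the characters of the next section). The sketch below is illustrative; the sample eigenvalues are an arbitrary choice, not anything from the text.

```python
from itertools import combinations, combinations_with_replacement

# g = diag(x1, ..., xn) acts on V = C^n with trace sum(x); on V (tensor) V the
# trace is (sum x)^2, while Sym2 V has basis {e_i.e_j : i <= j} and Alt2 V has
# basis {e_i^e_j : i < j}, giving the two sums below.
x = [2.0, 3.0, 5.0, 7.0]   # arbitrary sample eigenvalues (an assumption)
n = len(x)

tr_tensor = sum(x) ** 2
tr_sym = sum(x[i] * x[j] for i, j in combinations_with_replacement(range(n), 2))
tr_alt = sum(x[i] * x[j] for i, j in combinations(range(n), 2))

assert abs(tr_tensor - (tr_sym + tr_alt)) < 1e-9      # traces add over a direct sum
assert n * n == n * (n + 1) // 2 + n * (n - 1) // 2   # dimensions add as well
print(tr_sym, tr_alt)
```

The two assertions check that both the traces and the dimensions of the two summands add up to those of V ⊗ V.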
2.2. Characters of GLn .
Definition 2. We define the character of a representation ρ of G = GLn to be the
map:
χV : G → C
such that χV (g) = tr(ρ(g)) for all g ∈ G.
Since
χV(hgh−1) = tr(ρ(h)ρ(g)ρ(h)−1) = tr(ρ(g)) = χV(g),
the character χV is invariant under conjugation and hence is constant on the conjugacy classes of G.
Since the diagonalizable matrices are dense in GLn , the values of χV when restricted
to the diagonal matrices completely determine the character χV (since all diagonalizable
matrices are, by definition, conjugate to a diagonal matrix, and characters are constant
on conjugacy classes). Furthermore, since the n diagonal entries completely determine
a diagonal matrix, we can consider the character χV to actually be a map
χV : (C× )n → C.
When we consider the representations Vλ of GLn as discussed in Section 2.1.1, it can
be shown that χVλ (x1 , . . . , xn ) = sλ (x1 , . . . , xn ), i.e. the characters of the representations Vλ of GLn are in fact the Schur polynomials sλ (x1 , . . . , xn ). We will discuss these
polynomials in the next section.
It is important to note that two representations are isomorphic if and only if their
respective characters are the same. Furthermore, we have
χU⊕V = χU + χV
and
χU ⊗V = χU χV ,
allowing us to work with the characters when examining direct sums and tensor products
of modules.
3. Symmetric Polynomials
In this section we will give an overview of symmetric and Schur polynomials, following
the notation of [F]. We begin our discussion of Schur polynomials by defining several
combinatorial objects.
Definition 3. We define a partition to be a weakly decreasing sequence of non-negative integers λ = (λ1 ≥ λ2 ≥ · · · ≥ λt). We say that the number of parts of the partition is the largest integer t0 such that λt0 > 0, and set
|λ| := λ1 + · · · + λt.
Definition 4. For a given partition λ = (λ1 ≥ · · · ≥ λt) we define the Young diagram of shape λ to be a left-justified array of |λ| boxes with λi boxes in the ith row.
We often identify the partitions λ and λ′ if their corresponding Young diagrams are the same. In other words, we identify two partitions if their strictly positive parts are the same, regardless of how many parts of size zero occur.
We now consider filling these boxes, which we will also refer to as cells, with numbers.
We define a filling of a Young diagram to be a numbering of the boxes using positive
integers. In particular, we are interested in fillings that satisfy certain properties.
Definition 5. We say that a filling T of a Young diagram λ is a Young tableau if the numbering is weakly increasing across the rows of λ and strictly increasing down the columns. Given a Young tableau T, we say that T has content µ = (µ1, . . . , µt) ∈ Nt if T contains µi entries equal to i for 1 ≤ i ≤ t.
For a Young tableau T of shape λ and content µ = (µ1, . . . , µt), we denote by xT the monomial
xT := x1^µ1 · · · xt^µt.
We then define the Schur polynomial in t variables as
sλ(x1, . . . , xt) = ΣT xT,
where we sum over all possible tableaux T of shape λ filled with integers from 1 to t.
For ease of notation, we will denote
~xt := (x1 , . . . , xt ).
While not obvious from the definition, sλ (~xt ) is a symmetric polynomial in t variables
for any Young diagram of shape λ. Furthermore, letting λ vary over partitions of k having
at most t parts, the Schur polynomials form a basis for the symmetric polynomials in t
variables of degree k.
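The tableau definition can be made concrete with a brute-force enumeration. The sketch below (pure Python, with an arbitrary evaluation point of my choosing) enumerates semistandard tableaux of a given shape and evaluates sλ as the sum ΣT xT; evaluating at every permutation of the same point yields a single value, illustrating the symmetry claimed above.

```python
from itertools import product, permutations

def ssyt(shape, t):
    """Yield semistandard Young tableaux of `shape` with entries in 1..t:
    rows weakly increasing, columns strictly increasing."""
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))):
            yield rows

def schur_eval(shape, xs):
    """Evaluate s_shape(xs) as the tableau sum  s_lambda = sum_T x^T."""
    total = 0
    for rows in ssyt(shape, len(xs)):
        term = 1
        for row in rows:
            for entry in row:
                term *= xs[entry - 1]
        total += term
    return total

point = (2, 3, 5)   # arbitrary evaluation point (an assumption, not from the text)
values = {schur_eval((2, 1), p) for p in permutations(point)}
print(values)       # a single value: s_(2,1) is symmetric in its variables
```

This brute force is only feasible for tiny shapes, but it matches the definition cell for cell.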
Definition 6. Let λ = (λ1 ≥ · · · ≥ λt) be a partition of k having at most t strictly positive parts. We define mλ(~xt) to be the minimal symmetric polynomial generated by the monomial x1^λ1 · · · xt^λt.
3.1. Kostka Numbers.
Definition 7. We denote by kλµ (t) the number of possible Young tableaux of shape
λ = (λ1 ≥ · · · ≥ λt ) and content µ = (µ1 ≥ · · · ≥ µt ) (adjoining parts of size zero where
necessary).
We have the relation
sλ(~xt) = Σµ kλµ(t) mµ(~xt).
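Definition 7 suggests a direct, if inefficient, computation: enumerate all fillings of the shape and count those that are tableaux with the prescribed content. A minimal sketch, with hypothetical helper names:

```python
from itertools import product

def kostka(shape, content):
    """Count semistandard Young tableaux of `shape` and `content`,
    i.e. the Kostka number k_{shape,content}; brute force over all fillings."""
    t, count = len(content), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        rows_ok = all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1))
        cols_ok = all(rows[i][j] < rows[i + 1][j]
                      for i in range(len(rows) - 1) for j in range(len(rows[i + 1])))
        content_ok = all(filling.count(i + 1) == content[i] for i in range(t))
        if rows_ok and cols_ok and content_ok:
            count += 1
    return count

print(kostka((3, 1), (2, 1, 1)))  # 2: the tableaux 112/3 and 113/2
print(kostka((2, 1), (1, 1, 1)))  # 2: the standard tableaux of shape (2,1)
```

For serious computations one would use the horizontal-strip recursion instead, but this version follows the definition verbatim.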
3.2. Littlewood-Richardson Coefficients. Given two partitions λ and µ and their respective Schur polynomials sλ (~xt ) and sµ (~xt ), we can define the Littlewood-Richardson
coefficients cνλµ (t) through the expansion
sλ(~xt)sµ(~xt) = Σν cνλµ(t) sν(~xt).
That is to say, when one multiplies two Schur polynomials sλ (~xt ) and sµ (~xt ), the product,
being the product of two symmetric polynomials, is again a symmetric polynomial. Since
the Schur polynomials of degree |λ| + |µ| in t variables form a basis of the vector space
of symmetric polynomials of corresponding degree, we can rewrite the product as a sum
of Schur polynomials, with ν running over partitions of |λ| + |µ|. The Littlewood-Richardson coefficient cνλµ(t) is then the coefficient of the Schur polynomial sν(~xt) in the expansion.
3.3. Representations and Characters. The correspondence between characters of
representations of GLn and symmetric polynomials allows us to translate statements
from one setting to the other. For example,
sλ(~xt)sµ(~xt) = Σν cνλµ(t) sν(~xt)
gives us
Vλ ⊗ Vµ = ⊕ν cνλµ(t) Vν.
Denoting
hs(x1, . . . , xt) := Σ1≤i1≤···≤is≤t xi1 · · · xis
for a given s ∈ N, we define
hλ := Πtj=1 hλj(x1, . . . , xt)
for a partition λ = (λ1 ≥ · · · ≥ λt).
It can be shown that
hs (~xt ) = χV(s) = χSyms V .
Our definition of hλ then allows us to directly obtain
hλ = χSymλ1 V . . . χSymλt V .
The identity
hµ(~xt) = Σλ kλµ sλ(~xt)
(see [F]) then gives us the result
Symµ1 V ⊗ · · · ⊗ Symµt V = ⊕λ kλµ Vλ.
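The identity hµ = Σλ kλµ sλ can be spot-checked numerically. For µ = (2, 1) the only nonzero Kostka numbers are k(3),(2,1) = k(2,1),(2,1) = 1, so h2 h1 should equal s(3) + s(2,1). The helper functions and the evaluation point below are illustrative choices, not part of the text.

```python
from itertools import combinations_with_replacement, product

def h(s, xs):
    """Complete homogeneous polynomial h_s: sum of x_{i1}...x_{is}, i1 <= ... <= is."""
    total = 0
    for idx in combinations_with_replacement(range(len(xs)), s):
        term = 1
        for i in idx:
            term *= xs[i]
        total += term
    return total

def schur_eval(shape, xs):
    """Evaluate s_shape(xs) as the sum over semistandard tableaux of `shape`."""
    t, total = len(xs), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))):
            term = 1
            for row in rows:
                for e in row:
                    term *= xs[e - 1]
            total += term
    return total

xs = (2, 3, 5)                       # arbitrary evaluation point
lhs = h(2, xs) * h(1, xs)            # h_(2,1) = h_2 h_1
rhs = schur_eval((3,), xs) + schur_eval((2, 1), xs)
assert lhs == rhs
print(lhs)
```

Agreement at a generic point is of course only evidence, not a proof, but it catches transcription errors in the coefficients quickly.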
4. Restrictions
4.1. Interlacing Partitions.
Definition 8. For given partitions λ and µ, where λ has t parts and µ has t − 1 parts,
we say that µ interlaces λ, denoted µ ≺ λ, if
λ1 ≥ µ1 ≥ λ2 ≥ µ2 ≥ · · · ≥ λt−1 ≥ µt−1 ≥ λt .
This concept will allow us to obtain an important relation of Schur polynomials.
Theorem 1.
sλ(~xt) = Σµ≺λ sµ(~xt−1) xt^(|λ|−|µ|).
Proof. For a given Young tableau T of shape λ, we let Tt denote the object obtained
by removing all cells with the entry t, and mT the number of cells removed (that is to
say, the number of cells of T which contained the entry t). We claim that Tt is a Young
tableau of shape µ for some µ having r ≤ t − 1 parts and is filled with the entries from
[t − 1]. In fact, µ interlaces λ.
First, note that Tt has the shape of a Young diagram: removing cells with the entry t can only remove cells at the ends of rows, and a cell with the entry t cannot lie above another cell, since the entry below it would have to exceed t, contradicting the strict increase down the columns of T.
Given that we have a Young diagram with entries strictly increasing down columns
and weakly increasing across rows, it is clear that Tt is a Young tableau.
To see that µ interlaces λ, we first note that since µ was obtained by removing cells from λ, we have µi ≤ λi for all i. Furthermore, µi ≥ λi+1 : if µi < λi+1, then in the initial Young tableau the ith row contained a t lying above a cell of row i + 1, whose entry would have to exceed t, a contradiction. Thus, µ interlaces λ.
For any Young tableau T′ of shape µ interlacing λ, we obtain a unique tableau T of shape λ by filling in λ \ µ with t's. The result is again a tableau, as
λ1 ≥ µ1 ≥ λ2 ≥ µ2 ≥ · · · ≥ λt−1 ≥ µt−1 ≥ λt
ensures that the t's are placed below and to the right of all lower entries.
The theorem then follows immediately from the construction of the Schur polynomials
as
sλ(~xt) = ΣT xT. □
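Theorem 1 can be verified directly in small cases. The sketch below enumerates the partitions interlacing λ = (2, 1, 0) and checks the branching identity at an arbitrary evaluation point; all helper names here are illustrative assumptions.

```python
from itertools import product

def schur_eval(shape, xs):
    """s_shape(xs) as the sum over semistandard tableaux (brute force)."""
    t, total = len(xs), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))):
            term = 1
            for row in rows:
                for e in row:
                    term *= xs[e - 1]
            total += term
    return total

def interlacing(lam):
    """All mu with lam[0] >= mu[0] >= lam[1] >= mu[1] >= ... (one part fewer)."""
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(len(lam) - 1)]
    return list(product(*ranges))   # each choice is automatically weakly decreasing

lam, xs = (2, 1, 0), (2, 3, 5)      # lambda padded to t = 3 parts; arbitrary point
strip = lambda p: tuple(part for part in p if part > 0)   # drop zero parts
lhs = schur_eval(strip(lam), xs)
rhs = sum(schur_eval(strip(mu), xs[:-1]) * xs[-1] ** (sum(lam) - sum(mu))
          for mu in interlacing(lam))
assert lhs == rhs
print(sorted(interlacing(lam)), lhs)
```

Here (2, 1, 0) has exactly the four interlacing partitions (2, 1), (2, 0), (1, 1), (1, 0), matching the constraint λ1 ≥ µ1 ≥ λ2 ≥ µ2 ≥ λ3.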
4.2. Branching Rules. We now consider how a GLt -module Vλ restricts to GLt−1 ×
GL1 . It is natural to consider the embedding of GLt−1 × GL1 ⊂ GLt as matrices of the
form
[ A 0 ]
[ 0 c ]
where A ∈ GLt−1 and c ∈ C× = GL1 . It can be shown that for any irreducible polynomial
representation ρ′ of GLt−1 × GL1, ρ′ can be written in the form Vµ ⊗ Cm for µ a partition having s ≤ t − 1 parts and m ∈ N, where Cm is the C×-module with c acting by multiplication by cm. Note that Cm ≅ C⊗m, where C denotes the natural representation of GL1 and C−1 denotes its dual. We will continue using the notation Cm. We obtain the
decomposition of Vλ as a GLt−1 × GL1 -module into
Vλ ≅ ⊕(µ,m) rµm Vµ ⊗ Cm.
To determine the coefficients rµm we consider the characters of the representations.
Since the left hand side is simply the representation Vλ , we know that the corresponding
character is the Schur polynomial sλ (~xt ). On the right hand side, given our additive and
multiplicative properties of characters, the corresponding character should be
Σ(µ,m) rµm sµ(x1, . . . , xt−1) xt^m,
giving us the equation
sλ(x1, . . . , xt) = Σ(µ,m) rµm sµ(x1, . . . , xt−1) xt^m.
On the other hand, as previously discussed,
sλ(x1, . . . , xt) = Σµ≺λ sµ(x1, . . . , xt−1) xt^(|λ|−|µ|).
Since these two equations must agree, we have
Σ(µ,m) rµm sµ(x1, . . . , xt−1) xt^m = Σµ≺λ sµ(x1, . . . , xt−1) xt^(|λ|−|µ|).
Identifying coefficients and setting m = |λ| − |µ| gives
rµm = 1 if µ interlaces λ, and rµm = 0 otherwise.
Moving back from characters to representations, we obtain the result that as
GLt−1 × GL1 -modules
Vλ = ⊕µ≺λ Vµ ⊗ C|λ|−|µ|.
5. Beyond Polynomial Representations
For a partition λ = (λ1 ≥ · · · ≥ λn ), we can consider Vλ to be a representation of GLt
for t ≥ n by adding t − n parts of size zero to λ.
For λ = (λ1 ≥ · · · ≥ λt ) we now define
λ + ct := (λ1 + c, . . . , λt + c).
The corresponding Schur polynomial is
sλ+ct(~xt) = (x1 . . . xt)^c sλ(~xt).
This easily follows given the construction of sλ(~xt) as sλ(~xt) = ΣT xT, since adjoining
c cells to each row is equivalent to attaching a t × c rectangle to the left of the original
tableau. We can extend any filling of the original tableau of shape λ to a filling of the
tableau of shape λ + ct by filling the ith row of the t × c rectangle with i's. This is in
fact the only way that this rectangle can be filled, since the tableau must be strictly
increasing down columns. Thus,
sλ+ct(~xt) = (x1 . . . xt)^c sλ(~xt).
We can now extend the concept of a Schur polynomial to weakly decreasing sequences
of integers
λ = (λ1 ≥ · · · ≥ λt )
with λi ∈ Z. Letting d = −λt, so that λ + dt is a partition, we can use this shift to extend the definition of the Schur polynomial to the case when λ is an arbitrary weakly decreasing sequence of integers via
sλ(~xt) = sλ+dt(~xt)/(x1 . . . xt)^d.
Extending mλ(~xt) similarly, we can easily verify that sλ(~xt) = Σµ kλµ mµ(~xt) continues to hold by setting kλ+ct,µ+ct = kλµ.
We now extend the definition of a representation of GLn to include
ρ : G → GL(V) ≅ GLN,
where for A = (akl ) ∈ GLn
ρ(A) = (ρij (A))
where the ρij are polynomials in the entries of A and the inverse of its determinant,
det−1 (A). In this setting, the relation
sλ(x1, . . . , xn) = Σµ≺λ sµ(x1, . . . , xn−1) xn^(|λ|−|µ|)
still holds, since det A = x1 · · · xn on diagonal matrices allows us to use our shift of representations to use
the identity for partitions and shift back. Thus, we still have
Vλ = ⊕µ≺λ Vµ ⊗ C|λ|−|µ|.
Example 2. An important example to consider is GLn acting on V = Cn . This corresponds to the partition
λ = (1, 0, . . . , 0).
V has standard basis e1, . . . , en, so we have
ρ : GLn → GLn, A ↦ A.
Example 3. The dual module V∗ is also a representation. V∗ has dual basis e∗1, . . . , e∗n, where e∗i(ej) = δij, the Kronecker delta. We obtain the dual representation
ρ∗ : GLn → GLn, A ↦ (A−1)⊤.
Now, A−1 can be written as
(1/det A) A′,
where A′, the adjugate of A, is a matrix with entries that are polynomials in the entries of A. Thus, this representation is given by matrices with entries that are polynomials in the entries of A and det−1 A. The character of the dual representation will be
χV∗(~xn) = Σni=1 1/xi = m(−1)(~xn).
Consider now the representation corresponding to the partition λ = (0, 0, . . . , 0, −1).
Shifting this partition, we obtain the character
sλ(~xn) = s(1,1,...,1,0)(~xn)/(x1 . . . xn) = m(−1)(~xn).
Thus, by comparing characters, we see that the dual representation corresponds to the
partition λ = (0, . . . , 0, −1).
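The character comparison can be checked numerically via the shift: s(0,...,0,−1) should equal s(1,...,1,0) divided by x1 · · · xn, which in turn equals Σ 1/xi. A small sketch, with an arbitrary evaluation point:

```python
from itertools import product

def schur_eval(shape, xs):
    """s_shape(xs) as the sum over semistandard tableaux (brute force)."""
    t, total = len(xs), 0.0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))):
            term = 1.0
            for row in rows:
                for e in row:
                    term *= xs[e - 1]
            total += term
    return total

xs = (2.0, 3.0, 5.0, 7.0)            # arbitrary evaluation point, n = 4
n = len(xs)
prod_x = 1.0
for x in xs:
    prod_x *= x
lhs = schur_eval((1,) * (n - 1), xs) / prod_x   # s_(0,...,0,-1) via the shift by 1
rhs = sum(1.0 / x for x in xs)                  # m_(-1), the character of V*
assert abs(lhs - rhs) < 1e-12
print(lhs)
```

Note that s(1,...,1,0) is the elementary symmetric polynomial in n − 1 of the n variables, which is why the quotient collapses to Σ 1/xi.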
6. Weight Decompositions
Let H ⊂ GLn denote the subgroup of diagonal matrices. We define the H-module Cµ
to be the one-dimensional vector space C with H acting on C by
diag(z1, . . . , zn) · v = z1^µ1 · · · zn^µn v
for all v ∈ C.
Definition 9. Let ρ : GLn → GL(V) ≅ GLN be a representation. We refer to v ∈ V as a weight vector with weight µ = (µ1, . . . , µn) ∈ Zn if
z · v = z1^µ1 · · · zn^µn v
for all z = diag(z1 , . . . , zn ) ∈ H.
We define a weight space V^µ ⊂ V of a representation V to be the subspace of all vectors in V that are weight vectors with weight µ.
One can show that every representation of GLn decomposes into the direct sum of its weight spaces:
V = ⊕µ V^µ = ⊕µ (dim V^µ) Cµ.
Moreover, dim V^µ = dim V^σ(µ) for any σ ∈ Sn. Moving to the character of the irreducible representation Vλ, we obtain the expansion
sλ(~xn) = Σµ kλµ mµ(~xn).
Comparing these, we are able to conclude that
dim Vλ^µ = kλµ.
7. Some Notation
So far, we have only considered the Schur polynomials as polynomials in a finite
number of variables. However, we can easily extend the concept to infinitely many
variables by removing the restriction on the Young tableaux T to contain only integers
from 1 to some finite t. Through our previous definition of the Schur polynomials, we
then obtain
sλ = sλ(x1, x2, x3, . . . ) = ΣT xT.
For partitions λ,
sλ (x1 , . . . , xt , 0, 0, 0, . . . ) = sλ (x1 , . . . , xt )
for all t ∈ N, t > 0. Notationally, we will use mλ := mλ (x1 , x2 , x3 , . . . ) to refer to the
polynomial in infinitely many variables generated by the monomial xλ .
For example,
s(1) (~xt ) = x1 + · · · + xt = m(1) (~xt )
becomes
s(1) = Σi≥1 xi = m(1).
Similarly,
s(2)(~xt) = Σ1≤i≤t xi^2 + Σ1≤i<j≤t xixj = m(2)(~xt) + m(1,1)(~xt)
becomes
s(2) = Σi≥1 xi^2 + Σ1≤i<j xixj = m(2) + m(1,1).
This summation notation quickly becomes cumbersome, so we will simply use the symmetric polynomials mγ in expansions of Schur polynomials.
For example,
m(1,1)(~xt) = Σi<j xixj
in three variables gives the three terms
m(1,1)(x1, x2, x3) = x1x2 + x1x3 + x2x3,
while
m(2,1)(~xt) = Σi≠j xi^2 xj
in three variables gives the six terms
m(2,1)(x1, x2, x3) = x1^2x2 + x1x2^2 + x2^2x3 + x2x3^2 + x1^2x3 + x1x3^2.
8. Stability
8.1. Kostka Numbers. We begin our study of stability with the Kostka numbers. Specifically, we want to know whether there is a value t0 such that for partitions λ and µ having t > t0 parts,
kλµ(t) = kλµ(t0).
Again, we look at the characters
sλ(~xt) = Σµ kλµ(t) mµ(~xt),
corresponding to the weight decomposition
Vλ = ⊕µ kλµ Cµ.
The Kostka numbers demonstrate stability as we let t go to infinity. We recognize that for a partition µ = (µ1 ≥ · · · ≥ µt) we have xµ = x1^µ1 · · · xt^µt, but as µk = 0 for sufficiently large k, we have xk^µk = 1 for these k. Thus the multiplicity of the monomial x1^µ1 · · · xt^µt, namely the Kostka number kλµ(t), stabilizes for sufficiently large t. This stability can also be seen by viewing the Kostka number kλµ as the number of Young tableaux of shape λ and content µ: as the number of parts goes to infinity, the number of possible tableaux stabilizes, since adding parts of size zero to both λ and µ eventually creates no new Young tableaux of shape λ and content µ.
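This stabilization is easy to watch happen. Using a brute-force tableau count (the kostka helper below is an illustrative sketch, not from the text), padding both λ = (3, 1) and µ = (2, 1, 1) with zeros leaves the count unchanged:

```python
from itertools import product

def kostka(shape, content):
    """Brute-force count of semistandard tableaux of `shape` and `content`."""
    t, count = len(content), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))) \
           and all(filling.count(i + 1) == content[i] for i in range(t)):
            count += 1
    return count

lam, mu = (3, 1), (2, 1, 1)
for t in range(3, 7):
    lam_t = lam + (0,) * (t - len(lam))
    mu_t = mu + (0,) * (t - len(mu))
    print(t, kostka(lam_t, mu_t))   # the value repeats: k stabilizes in t
```

Since the content pins down which entries may appear, allowing larger entries never produces new tableaux, which is exactly the stability argument above.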
8.2. Littlewood-Richardson Coefficients. Having established the stability of the
Kostka numbers for standard partitions, we move to that of the Littlewood-Richardson
coefficients, maintaining the restriction of λ, µ, and ν to be partitions. As previously
discussed, for λ a partition having n strictly positive parts, we can view Vλ as a GLt-module for any t ≥ n by adding t − n parts of size zero to λ. The definition of the
Littlewood-Richardson coefficients as the coefficients in the expansion of the product
sλ (~xt )sµ (~xt ) is equivalent to
Vλ ⊗ Vµ = ⊕ν cνλµ Vν.
However, this decomposition of the tensor product of representations of GLt depends
on t. It is natural to then ask for what values of t the formula is valid. We begin by
considering an example.
Example 4. In terms of the bases of symmetric polynomials given by Schur polynomials,
we have
s(2) s(2) = Σν cν(2)(2) sν,
where we sum over partitions of 4. In terms of the corresponding representations, this becomes
V(2) ⊗ V(2) = ⊕ν cν(2)(2) Vν.
The partitions of 4 are (4), (3, 1), (2, 2), (2, 1, 1) and (1, 1, 1, 1). However, (1, 1, 1, 1), having 4 parts, does not correspond to a valid irreducible representation of GL3 . This can
be seen by moving to the corresponding characters. Considering the Schur polynomials
in an indeterminate number of variables we have:
s(2)(x1, . . . , xt) = x1^2 + · · · + xt^2 + x1x2 + · · · + xt−1xt = m(2) + m(1,1),
so that
s(2)s(2) = Σ xi^4 + 2Σ xi^3xj + 3Σ xi^2xj^2 + 4Σ xi^2xjxk + 6Σ xixjxkxl
= m(4) + 2m(3,1) + 3m(2,2) + 4m(2,1,1) + 6m(1,1,1,1).
However, if we let n = 3, the final sum, Σ xixjxkxl = m(1,1,1,1), disappears, since there are no collections of four distinct indices. We now find the corresponding Littlewood-Richardson coefficients. Corresponding to the partitions of 4, we have the Schur functions:
s(4) = Σ xi^4 + Σ xi^3xj + Σ xi^2xj^2 + Σ xi^2xjxk + Σ xixjxkxl
= m(4) + m(3,1) + m(2,2) + m(2,1,1) + m(1,1,1,1),
s(3,1) = Σ xi^3xj + Σ xi^2xj^2 + 2Σ xi^2xjxk + 3Σ xixjxkxl
= m(3,1) + m(2,2) + 2m(2,1,1) + 3m(1,1,1,1),
s(2,2) = Σ xi^2xj^2 + Σ xi^2xjxk + 2Σ xixjxkxl
= m(2,2) + m(2,1,1) + 2m(1,1,1,1),
s(2,1,1) = Σ xi^2xjxk + 3Σ xixjxkxl = m(2,1,1) + 3m(1,1,1,1),
and
s(1,1,1,1) = Σ xixjxkxl = m(1,1,1,1).
We see that
s(2)s(2) = m(4) + 2m(3,1) + 3m(2,2) + 4m(2,1,1) + 6m(1,1,1,1) = s(4) + s(3,1) + s(2,2).
Thus, we have the Littlewood-Richardson coefficients:
c(4)(2)(2) = 1, c(3,1)(2)(2) = 1, c(2,2)(2)(2) = 1, c(2,1,1)(2)(2) = 0, c(1,1,1,1)(2)(2) = 0.
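These coefficients can be confirmed numerically: s(2)s(2) and s(4) + s(3,1) + s(2,2) agree as polynomials, so they agree at any point, for t = 3 and t = 4 alike. The evaluation points below are arbitrary choices.

```python
from itertools import product

def schur_eval(shape, xs):
    """s_shape(xs) as the sum over semistandard tableaux (brute force)."""
    t, total = len(xs), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))):
            term = 1
            for row in rows:
                for e in row:
                    term *= xs[e - 1]
            total += term
    return total

for xs in [(2, 3, 5), (2, 3, 5, 7)]:
    lhs = schur_eval((2,), xs) ** 2
    rhs = schur_eval((4,), xs) + schur_eval((3, 1), xs) + schur_eval((2, 2), xs)
    assert lhs == rhs               # s_(2) s_(2) = s_(4) + s_(3,1) + s_(2,2)
    print(len(xs), lhs)
```

Note that s(2,1,1) and s(1,1,1,1) never enter: their coefficients vanish for every t, consistent with the table above.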
We now examine the general situation for partitions having only non-negative parts.
For clarity, we denote by cνλµ(t) the Littlewood-Richardson coefficient for partitions having t parts. To facilitate an inductive argument, we will use the lexicographic order on partitions: we say that λ ≻ µ if there exists i such that λi > µi and λj = µj for all j < i.
Theorem 2. For any partitions λ = (λ1 ≥ · · · ≥ λt ), µ = (µ1 ≥ · · · ≥ µt ), and
ν = (ν1 ≥ · · · ≥ νt ) there exists a positive integer t0 such that for t ≥ t0 , cνλµ (t) = cνλµ (t0 ).
That is to say, when increasing the number of variables in the Schur polynomials,
or equivalently, the number of parts in the corresponding partitions, the LittlewoodRichardson coefficients stabilize. We will henceforth reserve cνλµ for this stable asymptotic
value.
Proof. We claim that there exists t0 ∈ N such that for t > t0 , cνλµ (t) = cνλµ (t0 ). This can
be seen by looking at
sλ(~xt)sµ(~xt) = Σν cνλµ(t) sν(~xt).
As t → ∞, we know that the Kostka numbers kλµ (t) stabilize for standard partitions.
Using the expansion of the Schur polynomials as
sλ(~xt) = Στ kλτ(t) mτ(~xt),
we obtain
(Στ kλτ(t)mτ(~xt))(Στ′ kµτ′(t)mτ′(~xt)) = Σν cνλµ(t) Στ″ kντ″(t)mτ″(~xt).
Since the Kostka numbers stabilize, the left hand side of the equation stabilizes. We use
an inductive argument based on the lexicographic order for the right hand side.
We begin by comparing the highest terms in the expansion
sλ(~xt)sµ(~xt) = Σν cνλµ(t) sν(~xt).
Since
sλ(~xt) = mλ(~xt) + Στ≺λ kλτ mτ(~xt),
and similarly
sµ(~xt) = mµ(~xt) + Στ′≺µ kµτ′ mτ′(~xt),
the highest term on the left hand side is xλ+µ. On the other hand, the highest term on the right hand side is cλ+µλµ(t) xλ+µ. This shows us that
cλ+µλµ(t) = 1,
proving stability for the term of highest lexicographic order in the expansion. We now assume that stability holds for any ν ≻ ν0. We begin with the expansion of the sums as
(Στ kλτ mτ(~xt))(Στ′ kµτ′ mτ′(~xt)) = Σν cνλµ(t) sν(~xt).
We can rearrange these sums as
(Στ kλτ mτ(~xt))(Στ′ kµτ′ mτ′(~xt)) − Σν≻ν0 cνλµ(t)sν(~xt) = cν0λµ(t)sν0(~xt) + Σν≺ν0 cνλµ(t)sν(~xt).
We now compare the coefficients in front of the term xν0 on both sides, using the expansion
(Στ kλτ mτ(~xt))(Στ′ kµτ′ mτ′(~xt)) − Σν≻ν0 cνλµ Στ″ kντ″ mτ″(~xt) = cν0λµ(t)sν0(~xt) + Σν≺ν0 cνλµ(t)sν(~xt).
On the left hand side, this coefficient is a combination of stable coefficients, hence is itself stable. The coefficient of xν0 on the right hand side is cν0λµ(t), establishing the result. □
9. Asymptotics
We now begin our study of sequences of the form λ = (λ+, λ−). We construct these sequences by taking two partitions λ+ = (λ+_1 ≥ · · · ≥ λ+_r) and λ− = (λ−_1 ≥ · · · ≥ λ−_s), joining them together after negating and reversing the order of the parts of λ− to obtain
λt = (λ+_1 ≥ · · · ≥ λ+_r ≥ 0 ≥ · · · ≥ 0 ≥ −λ−_s ≥ · · · ≥ −λ−_1).
Throughout our discussion, we assume that the total number of parts, or terms, in λ,
including parts of size 0, is t. When we discuss asymptotic behaviour, we will be allowing
t to grow large by inserting parts of size zero.
We will denote by
sλ(~xt) := sλt(~xt) = s(λ+_1,...,λ+_r,0,...,0,−λ−_s,...,−λ−_1)(~xt)
the Schur polynomial corresponding to the partition λ. For example,
s(1,−1)(~xt) = m(1,−1)(~xt) + (t − 1).
We now compute another example.
Example 5. We note that
s(−1) = m(−1),
s(1,−2) = m(1,−2) + m(1,−1,−1) + (t − 1)m(−1),
and
s(1,−1,−1) = m(1,−1,−1) + (t − 2)m(−1).
Then
s(−1)s(1,−1) = m(−1)(m(1,−1) + (t − 1))
= m(1,−2) + 2m(1,−1,−1) + (t − 1)m(−1) + (t − 1)m(−1)
= m(1,−2) + 2m(1,−1,−1) + 2(t − 1)m(−1)
= s(1,−2) + s(1,−1,−1) + s(−1).
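The constant term t − 1 of s(1,−1) above is the Kostka number kλµ(t) for λt = (1, 0, . . . , 0, −1) and µ = (0, . . . , 0), and it can be computed for each t by shifting both sequences by 1 and counting tableaux of shape (2, 1, . . . , 1, 0) with content (1, . . . , 1). The brute-force counter below is an illustrative sketch; the printed values grow linearly in t, matching t − 1.

```python
from itertools import product

def kostka(shape, content):
    """Brute-force count of semistandard tableaux of `shape` and `content`."""
    t, count = len(content), 0
    for filling in product(range(1, t + 1), repeat=sum(shape)):
        rows, pos = [], 0
        for r in shape:
            rows.append(filling[pos:pos + r]); pos += r
        if all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j]
                   for i in range(len(rows) - 1) for j in range(len(rows[i + 1]))) \
           and all(filling.count(i + 1) == content[i] for i in range(t)):
            count += 1
    return count

# lambda_t = (1, 0, ..., 0, -1) and mu = (0, ..., 0), both shifted by +1 per part:
for t in range(2, 7):
    shape = (2,) + (1,) * (t - 2) + (0,)
    content = (1,) * t
    print(t, kostka(shape, content))   # k(t) = t - 1, a polynomial in t
```

This is the simplest instance of the polynomial behaviour proved in Section 10: a hook-shaped shifted diagram whose standard fillings are counted by t − 1.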
As a corollary to Theorem 2.3 in [PS], we obtain that for given partitions λ = (λ+, ∅) and µ = (∅, µ−),
s(λ+,∅) s(∅,µ−) = Σν cν(λ+,∅)(∅,µ−) sν,
where ν = (ν+, ν−) and, for r = |λ+| − |ν+| = |µ−| − |ν−|,
cν(λ+,∅)(∅,µ−) = Σ|γ|=r cλ+ν+γ cµ−ν−γ.
Firstly, we note that c(λ+,µ−)(λ+,∅)(∅,µ−) = 1, since ν = (λ+, µ−) gives |λ+| − |ν+| = 0, which implies that the only summand is the one corresponding to γ = ∅, for which cλ+λ+∅ = cµ−µ−∅ = 1. Secondly, for any ν = (ν+, ν−) ≠ (λ+, µ−), |ν+| ≥ |λ+| or |ν−| ≥ |µ−| implies that cν(λ+,∅)(∅,µ−) = 0. We then have
s(λ+,∅) s(∅,µ−) = s(λ+,µ−) + Σν cν(λ+,∅)(∅,µ−) sν.
Having established that the generalized Littlewood-Richardson coefficients cν(λ+,∅)(∅,µ−) can be written as a combination of Littlewood-Richardson coefficients for standard partitions, we are able to infer their stability.¹
In the next section, we will use this new stability to deduce the polynomial behaviour
of the Kostka numbers.
10. The Polynomial Behaviour of Kostka Numbers
In this section, we will prove that $k_{\lambda\mu}(t)$ is a polynomial in $t$, where $\lambda = (\lambda_+, \lambda_-)$ and $\mu = (\mu_+, \mu_-)$. Furthermore, as an integer valued polynomial, $k_{\lambda\mu}(t)$ can be written in the binomial basis for integer valued polynomials,
$$\Bigl\{ \binom{t+i}{i} \Bigm| i \geq 0 \Bigr\},$$
in addition to the usual basis of polynomials $\{t^i \mid i \geq 0\}$. Our first proof of the polynomial behaviour of Kostka numbers will rely on the results from [PS]. We begin by proving a lemma.
Lemma 1. Let $p(\vec{x}_t)$ and $q(\vec{x}_t)$ be symmetric Laurent polynomials in $t$ variables with coefficients that are polynomials in $t$. Then the product $p(\vec{x}_t)q(\vec{x}_t)$ also has coefficients that are polynomials in $t$.
Proof. We say that $p(\vec{x}_t)$ involves at most $k$ variables if, when expanded as a sum of monomial symmetric polynomials $m_\gamma$, i.e. $p(\vec{x}_t) = \sum_\gamma r_\gamma m_\gamma(\vec{x}_t)$, each $\gamma$ has at most $k$ parts. We say that $q(\vec{x}_t)$ involves at most $s$ variables, and induct on the total $k + s$. The base case is clear, as if $k + s = 0$, we simply have the product $1 \cdot 1 = 1$.
¹ No formula for general $c^{\nu}_{\lambda\mu}$ can be derived directly from [PS], but these coefficients likely also stabilize. We will not, however, address this question here.
We now assume that the statement holds for all $p'(\vec{x}_t)$ involving $k'$ variables and $q'(\vec{x}_t)$ involving $s'$ variables such that $k' + s' < k + s$. On one hand, as the product of symmetric polynomials, we have
$$p(\vec{x}_t)q(\vec{x}_t) = \sum_\gamma a_\gamma(t)\, m_\gamma(\vec{x}_t),$$
where $a_\gamma(t)$ is some function. On the other hand, by sorting by the powers of $x_t$, we write
$$p(\vec{x}_t) = \sum_i p_i(\vec{x}_{t-1})\, x_t^i, \qquad q(\vec{x}_t) = \sum_j q_j(\vec{x}_{t-1})\, x_t^j,$$
where each $p_i(\vec{x}_{t-1})$ and each $q_j(\vec{x}_{t-1})$ is a symmetric Laurent polynomial in $x_1, \dots, x_{t-1}$. Since the inductive assumption applies to any pair $p_i, q_j$ with $(i,j) \neq (0,0)$, we have
$$p(\vec{x}_t)q(\vec{x}_t) = \Bigl(\sum_i p_i(\vec{x}_{t-1})x_t^i\Bigr)\Bigl(\sum_j q_j(\vec{x}_{t-1})x_t^j\Bigr) = p_0(\vec{x}_{t-1})q_0(\vec{x}_{t-1}) + \sum_{(i,j) \neq (0,0)} p_i(\vec{x}_{t-1})q_j(\vec{x}_{t-1})\, x_t^{i+j}$$
$$= p_0(\vec{x}_{t-1})q_0(\vec{x}_{t-1}) + \sum_d \Bigl(\sum_\beta a_{\beta,d}(t-1)\, m_\beta(\vec{x}_{t-1})\Bigr) x_t^d.$$
So,
$$(3) \qquad \sum_\gamma a_\gamma(t)\, m_\gamma(\vec{x}_t) = p_0(\vec{x}_{t-1})q_0(\vec{x}_{t-1}) + \sum_d \Bigl(\sum_\beta a_{\beta,d}(t-1)\, m_\beta(\vec{x}_{t-1})\Bigr) x_t^d.$$
If $\gamma = (\gamma_1, \dots, \gamma_t) \neq \emptyset$, say $\gamma_t \neq 0$, then only the sum $\sum_d (\sum_\beta a_{\beta,d}(t-1) m_\beta(\vec{x}_{t-1})) x_t^d$ contributes to the monomial $\vec{x}^\gamma$, so $a_\gamma(t) = a_{\beta,\gamma_t}(t-1)$ for $\beta = (\gamma_1, \dots, \gamma_{t-1})$, hence is a polynomial in $t$ by the inductive hypothesis. If $\gamma = \emptyset$, comparing constant coefficients on either side of (3) we obtain
$$a_\emptyset(t) = a_\emptyset(t-1) + a_{\emptyset,0}(t-1) \;\Rightarrow\; a_\emptyset(t) - a_\emptyset(t-1) = a_{\emptyset,0}(t-1),$$
giving a difference equation whose right hand side is a polynomial in $t$, and whose solution is therefore a polynomial in $t$ (see Appendix). This gives us that $a_\emptyset(t)$ is also a polynomial, and hence that the entire product $p(\vec{x}_t)q(\vec{x}_t)$ has polynomial coefficients, as desired.
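A minimal concrete instance of Lemma 1 (an illustration with helper names of our own choosing, not notation from the text): the product $m_{(1)} \cdot m_{(-1)} = m_{(1,-1)} + t$ has constant coefficient $t$, visibly a polynomial in the number of variables. The sketch below checks this identity for several values of $t$.

```python
from fractions import Fraction

def power_sum(xs, e):
    """m_(e) evaluated at xs: the sum of x_i^e over all variables."""
    return sum(Fraction(x) ** e for x in xs)

def m_1_neg1(xs):
    """m_(1,-1) at xs: the sum of x_i / x_j over all ordered pairs i != j."""
    return sum(Fraction(xs[i], xs[j])
               for i in range(len(xs)) for j in range(len(xs)) if i != j)

for t in range(2, 7):
    xs = [2, 3, 5, 7, 11, 13][:t]
    # m_(1) * m_(-1) = m_(1,-1) + t: the coefficient of m_() is the polynomial t
    assert power_sum(xs, 1) * power_sum(xs, -1) == m_1_neg1(xs) + t
```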
Theorem 3. $k_{\lambda\mu}(t)$ is a polynomial in $t$.
Proof. We begin by noting that we extend the lexicographic order to an order on pairs $\mu = (\mu_+, \mu_-)$ and $\nu = (\nu_+, \nu_-)$ by saying that $\mu \succeq \nu$ if $\mu_+ \succeq \nu_+$ and $\mu_- \succeq \nu_-$. We will establish the polynomial behaviour using the stability of the Littlewood-Richardson coefficients and induction on the above order. For the base case, $s_\emptyset(\vec{x}_t) = 1$ has coefficients that are polynomials in $t$; namely, all are the zero polynomial except for $k_{\emptyset\emptyset} = 1$. We now assume that $s_{\nu'}(\vec{x}_t)$ is polynomial (meaning it has coefficients that are polynomials in $t$) for all $\nu' \prec \nu = (\nu_+, \nu_-)$. Consider
$$s_{(\nu_+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu_-)}(\vec{x}_t) = \sum_{\nu'} c^{\nu'}_{(\nu_+,\emptyset)(\emptyset,\nu_-)}\, s_{\nu'}(\vec{x}_t).$$
On the right hand side, $\nu = (\nu_+, \nu_-)$ is the largest element, appearing with coefficient $1$. We can therefore rearrange the equation to obtain
$$s_{(\nu_+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu_-)}(\vec{x}_t) - \sum_{\nu' \prec \nu} c^{\nu'}_{(\nu_+,\emptyset)(\emptyset,\nu_-)}\, s_{\nu'}(\vec{x}_t) = s_\nu(\vec{x}_t).$$
By Lemma 1 and the inductive hypothesis, all Schur polynomials on the left hand side of the equation are polynomial, as is the product $s_{(\nu_+,\emptyset)}(\vec{x}_t)\, s_{(\emptyset,\nu_-)}(\vec{x}_t)$. Moreover, the Littlewood-Richardson coefficients are stable. Therefore, $s_\nu(\vec{x}_t)$ is also polynomial, establishing the result.
Neither Theorem 3 nor its proof actually provides much information about the polynomial $k_{\lambda\mu}(t)$. Approaching the problem through branching rules allows us to determine both the degree and leading coefficient of the Kostka polynomial $k_{\lambda\mu}(t)$.
Theorem 4. The degree of $k_{\lambda\mu}(t)$ is $d = |\lambda_+| - |\mu_+| = |\lambda_-| - |\mu_-|$. Furthermore, the leading coefficient of $k_{\lambda\mu}(t)$ is $\frac{1}{d!}\, k_{\lambda_+\tilde{\mu}_+} k_{\lambda_-\tilde{\mu}_-}$, where $\tilde{\mu}_+$ is obtained from $\mu_+$ by adjoining $|\lambda_+| - |\mu_+|$ single cells to the corresponding Young diagram, that is to say, $\tilde{\mu}_+ = (\mu_+, 1, \dots, 1)$ (and similarly for $\tilde{\mu}_-$).
Proof. In order to prove this result, we must first establish a recurrence relation for $k_{\lambda\mu}(t)$. We will do this using the branching rules as discussed in Section 4.2. Recall that we have the decomposition
$$V_{\lambda_{t+1}} = \bigoplus_{\nu_t \prec \lambda_{t+1}} V_{\nu_t} \otimes \mathbb{C}^{|\lambda| - |\nu|},$$
which corresponds to the relation of characters
$$s_{\lambda_{t+1}}(\vec{x}_{t+1}) = \sum_{\nu_t \prec \lambda_{t+1}} s_{\nu_t}(\vec{x}_t)\, x_{t+1}^{|\lambda| - |\nu|}.$$
Expanding the Schur polynomials in terms of the Kostka numbers gives
$$\sum_\mu k_{\lambda\mu}(t+1)\, m_\mu(\vec{x}_{t+1}) = \sum_{\nu_t \prec \lambda_{t+1}} \Bigl(\sum_\mu k_{\nu\mu}(t)\, m_\mu(\vec{x}_t)\Bigr) x_{t+1}^{|\lambda| - |\nu|}.$$
We now compare coefficients of the terms
$$\vec{x}^{\mu}_{t+1} = x_1^{\mu_1} \cdots x_{t+1}^{\mu_{t+1}}$$
on either side of the above equation. To simplify our induction, we look at the terms where $\mu_{t+1} = 0$, that is to say, we are really comparing coefficients for terms of the form $x_1^{\mu_1} \cdots x_t^{\mu_t}$. This gives us
$$k_{\lambda\mu}(t+1) = \sum_{\nu_t \prec \lambda_{t+1},\; |\lambda| = |\nu|} k_{\nu\mu}(t).$$
We note that $\lambda_t \prec \lambda_{t+1}$. Denoting $a_t = k_{\lambda\mu}(t)$, we have
$$k_{\lambda\mu}(t+1) = k_{\lambda\mu}(t) + \sum_{\nu_t \prec \lambda_{t+1},\; |\lambda| = |\nu|,\; \nu \neq \lambda} k_{\nu\mu}(t),$$
so
$$(4) \qquad a_{t+1} - a_t = \sum_{\nu_t \prec \lambda_{t+1},\; |\lambda| = |\nu|,\; \nu \neq \lambda} k_{\nu\mu}(t).$$
We will now induct on $d = |\lambda_+| - |\mu_+|$ to prove that $k_{\lambda\mu}(t)$ is of degree $d$ with leading coefficient $\frac{1}{d!}\, k_{\lambda_+\tilde{\mu}_+} k_{\lambda_-\tilde{\mu}_-}$.

For the base case, we assume $|\lambda_+| = |\mu_+|$. We note that the right hand side of (4) is then zero, as for any $\nu$ such that $\nu_t \prec \lambda_{t+1}$ where $\nu \neq \lambda$,
$$|\nu_+| < |\lambda_+| = |\mu_+| \;\Rightarrow\; k_{\nu\mu}(t) = 0.$$
Thus $a_{t+1} - a_t = 0$, so $a_t$ is a constant. To determine this constant, we note that $|\lambda_+| = |\mu_+|$ and $|\lambda_-| = |\mu_-|$, so by looking at the corresponding Young diagrams with the appropriate shifts, since the positive and negative portions of the Young diagram are forced to be filled independently, we obtain $k_{\lambda\mu}(t) = k_{\lambda_+\mu_+} k_{\lambda_-\mu_-}$, establishing the result.

We now assume that the result holds for $\nu = (\nu_+, \nu_-)$ such that $|\nu_+| - |\mu_+| < d$. In (4), the right hand side is a sum of polynomials of degrees $|\nu_+| - |\mu_+| < d$. To see that the degree of the sum is, in fact, exactly $d - 1$, we note that sequences of the form $\nu = (\nu_+, \nu_-)$ obtained by removing one cell each from suitable locations of $\lambda_+$ and $\lambda_-$, i.e. at the end of a row where removal does not conflict with the weakly decreasing row lengths, will interlace $\lambda$ and will satisfy $|\nu_+| = |\lambda_+| - 1$. Each sequence of this form gives a Kostka polynomial $k_{\nu\mu}(t)$ of degree $d - 1$ and positive leading coefficient, so no cancellation occurs. Thus, the degree of the right hand side of (4) is exactly $d - 1$, so Theorem A implies that the degree of $k_{\lambda\mu}(t)$ is $d$.
The leading coefficient of $a_t = k_{\lambda\mu}(t)$ (in the expansion in the binomial basis) will be the sum of all leading coefficients on the right hand side of (4) such that $|\lambda_+| - |\nu_+| = 1$. In other words, it is equal to the sum
$$(5) \qquad \sum_{|\lambda_+| - |\nu_+| = 1} C_\nu,$$
where $C_\nu$ denotes the leading coefficient of the Kostka polynomial $k_{\nu\mu}(t)$. By the inductive hypothesis, we have
$$C_\nu = k_{\nu_+\tilde{\mu}_+} k_{\nu_-\tilde{\mu}_-},$$
which combined with (5) implies that the leading coefficient of $k_{\lambda\mu}(t)$ equals
$$(6) \qquad \sum_{|\lambda_+| - |\nu_+| = |\lambda_-| - |\nu_-| = 1} k_{\nu_+\tilde{\mu}_+} k_{\nu_-\tilde{\mu}_-} = \Bigl(\sum_{|\lambda_+| - |\nu_+| = 1} k_{\nu_+\tilde{\mu}_+}\Bigr)\Bigl(\sum_{|\lambda_-| - |\nu_-| = 1} k_{\nu_-\tilde{\mu}_-}\Bigr).$$
To compute the sums on the right hand side of (6), notice that for standard partitions $\tau$ and $\eta$, with the definition of the Kostka number $k_{\tau\eta}$ as the number of possible Young tableaux of shape $\tau$ and content $\eta = (\eta_1, \dots, \eta_t)$,
$$(7) \qquad k_{\tau\eta} = \sum_{\tau'} k_{\tau'\bar{\eta}},$$
where $\tau'$ runs over all Young diagrams obtained from $\tau$ by removing one cell, and $\bar{\eta} = \eta - (0, \dots, 0, 1)$ (assuming that $\eta_t > 0$). This follows because, for each Young tableau of shape $\tau$ and content $\eta$, removing a cell containing the entry $t$ creates a new Young tableau (following much the same argument as in Section 3.1), and given any Young tableau obtained by the removal of a single cell containing the entry $t$, we can reconstruct a unique "parent" Young tableau.
Rewriting (7) as
$$k_{\tau\eta} = \sum_{|\tau| - |\tau'| = 1} k_{\tau'\bar{\eta}}$$
and combining this with (5) and (6), we conclude that the leading coefficient of $k_{\lambda\mu}(t)$ in the binomial basis is
$$k_{\lambda_+\tilde{\mu}_+} k_{\lambda_-\tilde{\mu}_-}.$$
11. Further Asymptotics
After proving that the Kostka numbers do exhibit polynomial behaviour, a natural next step is to try to determine what these polynomials actually are. Given partitions $\lambda = (\lambda_+, \lambda_-)$ and $\mu = (\mu_+, \mu_-)$, is it possible to determine the Kostka polynomial $k_{\lambda\mu}(t)$? In Section 10 we were able to determine the degree and leading coefficient of $k_{\lambda\mu}(t)$; however, at this time no general formula for determining $k_{\lambda\mu}(t)$ explicitly is known. We can, however, establish formulas for two simple cases.
Theorem 5.
$$k_{(s,-s),\emptyset}(t) = \binom{t+s-2}{s}.$$

First Proof. This statement follows from the fact that
$$k_{(s,-s),\emptyset}(t) = k_{(2s,\,s^{t-2}),\,s^t},$$
where $k_{(2s,\,s^{t-2}),\,s^t}$ denotes the regular Kostka number obtained by simply counting the number of Young tableaux of shape $(2s, s^{t-2})$ and content $s^t$.

In the first row of $(s,-s) + s^t$, we have no choice but to fill the first $s$ spots with $1$'s. We then have an arbitrary choice with repetition of $s$ elements out of $\{2, \dots, t\}$ for the remaining $s$ spots in the first row. These choices completely determine the tableau, as the $(t-2) \times s$ block can then only be filled in one way. Thus, by the formula for choosing $s$ elements with repetition out of $t-1$ possible choices, we have that
$$k_{(s,-s),\emptyset}(t) = \binom{(t-1) + (s-1)}{s} = \binom{t+s-2}{s},$$
as desired. From this formula, we can see that $k_{(s,-s),\emptyset}(t)$ is, for fixed $s$, a polynomial in $t$ of degree $s$ with leading coefficient $\frac{1}{s!}$, as expected from Theorem 4.
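Theorem 5 can be sanity-checked by brute force for small parameters. The sketch below (our illustration; the helper `kostka` is a hypothetical name, not from the text) counts semistandard Young tableaux of shape $(2s, s^{t-2})$ and content $s^t$ by backtracking and compares the count with $\binom{t+s-2}{s}$.

```python
from math import comb

def kostka(shape, content):
    """Count semistandard Young tableaux of the given shape and content:
    rows weakly increase, columns strictly increase, and the entry v is
    used exactly content[v-1] times."""
    n = len(content)
    T = [[0] * r for r in shape]
    rem = list(content)
    cells = [(r, c) for r in range(len(shape)) for c in range(shape[r])]

    def fill(k):
        if k == len(cells):
            return 1  # all cells filled; contents are exhausted automatically
        r, c = cells[k]
        lo = T[r][c - 1] if c > 0 else 1        # row entries weakly increase
        if r > 0:
            lo = max(lo, T[r - 1][c] + 1)       # column entries strictly increase
        total = 0
        for v in range(lo, n + 1):
            if rem[v - 1] > 0:
                T[r][c], rem[v - 1] = v, rem[v - 1] - 1
                total += fill(k + 1)
                rem[v - 1] += 1
        return total

    return fill(0)

# k_{(s,-s),empty}(t) equals the Kostka number of shape (2s, s^{t-2}), content s^t
for s in range(1, 4):
    for t in range(2, 6):
        shape = (2 * s,) + (s,) * (t - 2)
        assert kostka(shape, (s,) * t) == comb(t + s - 2, s)
```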
Second Proof. We can also obtain this result from branching rules and induction. Since
$$k_{(s,-s)\emptyset}(t+1) = k_{(s,-s)\emptyset}(t) + \sum_{j=0}^{s-1} k_{(j,-j)\emptyset}(t)$$
and
$$k_{(s+1,-(s+1))\emptyset}(t+1) = k_{(s+1,-(s+1))\emptyset}(t) + k_{(s,-s)\emptyset}(t) + \sum_{j=0}^{s-1} k_{(j,-j)\emptyset}(t),$$
we obtain
$$k_{(s+1,-(s+1))\emptyset}(t+1) = k_{(s+1,-(s+1))\emptyset}(t) + k_{(s,-s)\emptyset}(t+1).$$
Defining
$$p(t,s) := k_{(s,-s)\emptyset}(t),$$
we thus obtain the recurrence relation with boundary conditions
$$p(t+1, s+1) = p(t, s+1) + p(t+1, s), \qquad p(t, 0) = 1, \qquad p(2, s) = 1,$$
as
$$p(2, s) = k_{(s,-s)\emptyset}(2) = k_{(2s,0),(s,s)} = 1.$$
We claim that $p(t,s) = \binom{t+s-2}{s}$. Clearly,
$$\binom{t-2}{0} = 1 = p(t,0)$$
and
$$\binom{2+s-2}{s} = \binom{s}{s} = 1 = p(2,s).$$
We now induct on $t + s$, assuming that the statement holds for all $t'$ and $s'$ such that $t' + s' < t + s$. By the recurrence relation,
$$p(t,s) = p(t-1,s) + p(t,s-1),$$
which, by the inductive hypothesis, gives
$$p(t,s) = \binom{t-1+s-2}{s} + \binom{t+s-1-2}{s-1} = \binom{t+s-3}{s} + \binom{t+s-3}{s-1}.$$
Using the relation $\binom{n+1}{r+1} = \binom{n}{r+1} + \binom{n}{r}$, we obtain
$$p(t,s) = \binom{t+s-2}{s},$$
the desired result.
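Since the recurrence and boundary conditions determine the whole table of values $p(t,s)$, the closed form can also be checked by dynamic programming (a sketch with variable names of our choosing):

```python
from math import comb

# Build p(t, s) from the recurrence p(t+1, s+1) = p(t, s+1) + p(t+1, s)
# with boundary conditions p(t, 0) = 1 and p(2, s) = 1.
T_MAX, S_MAX = 12, 8
p = {}
for t in range(2, T_MAX + 1):
    for s in range(S_MAX + 1):
        if s == 0 or t == 2:
            p[t, s] = 1                         # boundary conditions
        else:
            p[t, s] = p[t - 1, s] + p[t, s - 1]  # rewritten recurrence

# every table entry matches the closed form binom(t + s - 2, s)
for (t, s), value in p.items():
    assert value == comb(t + s - 2, s)
```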
We now look at $\lambda = (1^s, -1^s)$ and $\mu = \emptyset$. To begin our discussion, we introduce the notion of a Dyck path:

Definition 10. A Dyck path is a path of steps of length $1$ in the $(x,y)$-plane from the point $(0,0)$ to $(s,s)$, where each step goes either directly up or to the right and does not cross the line $x = y$. The number of Dyck paths from $(0,0)$ to $(s,s)$ is known to be the $s$th Catalan number,
$$C_s = \frac{1}{s+1}\binom{2s}{s}$$
(see [W]).
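The Catalan count in Definition 10 is easy to confirm with a direct dynamic program over lattice points (an illustrative sketch, with paths kept weakly below the diagonal):

```python
from math import comb

def dyck_paths(s):
    """Count up/right paths from (0,0) to (s,s) that never cross the line x = y
    (equivalently, always satisfy y <= x), by summing counts over lattice points."""
    ways = [[0] * (s + 1) for _ in range(s + 1)]
    ways[0][0] = 1
    for x in range(s + 1):
        for y in range(s + 1):
            if y > x or (x, y) == (0, 0):
                continue  # (0,0) is the seed; points with y > x are forbidden
            if x > 0:
                ways[x][y] += ways[x - 1][y]  # arrive by a step to the right
            if y > 0:
                ways[x][y] += ways[x][y - 1]  # arrive by a step up
    return ways[s][s]

# the count matches the s-th Catalan number C_s = binom(2s, s) / (s + 1)
for s in range(9):
    assert dyck_paths(s) == comb(2 * s, s) // (s + 1)
```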
Theorem 6.
$$k_{(1^s,-1^s),\emptyset}(t) = \binom{t}{t-s} - \sum_{i \geq 0} C_i \binom{t-(2i+1)}{t-s-i}.$$
Proof. To determine $k_{(1^s,-1^s),\emptyset}(t)$, we will look at $k_{(2^s,1^{t-2s}),1^t}$. Column one of $(1^s,-1^s) + 1^t = (2^s, 1^{t-2s})$ has $t-s$ spots, and column two has $s$. Choosing the elements in the first column completely determines the Young tableau, as the second column will then be forced to be filled in increasing order. In total, there are $\binom{t}{t-s}$ ways of choosing the first column. However, not all of these choices will allow for a valid Young tableau. We must therefore subtract from the total number of possibilities the number of fillings that are not valid tableaux. Let $a_i$ denote the number in row $i+1$ in the first column. Thus each filling directly corresponds to a $(t-s)$-tuple $(a_0, \dots, a_{t-s-1})$.

We note that $a_i < 2(i+1)$ for $0 \leq i \leq s-1$ is a necessary and sufficient condition for the corresponding filling to be a Young tableau. Thus, from the total number of choices for the first column, we must subtract all $(t-s)$-tuples which violate the above condition.

It is clear that there are $\binom{t-1}{t-s}$ ways to fill the first column with $a_0 > 1$, i.e. with a violation in the first row. Thus we begin by subtracting $\binom{t-1}{t-s}$ from the total $\binom{t}{t-s}$. In general, we claim that there are $C_i \binom{t-(2i+1)}{t-s-i}$ possible fillings that satisfy $a_j < 2j+2$ for $0 \leq j \leq i-1$, but violate the condition for the first time at $a_i$, in the $(i+1)$st row.

Firstly, there are $C_i$ ways to construct a valid filling for the first $i$ rows. We establish this by filling the $i \times 2$ rectangle with the numbers $1, \dots, 2i$ in order. For a placement in the left of the two columns, we add a step to the right to our Dyck path, and for a placement in the right column we assign a vertical step. The condition $a_j < 2j+2$ corresponds to the condition that the Dyck path does not cross the line $x = y$. Thus, for every valid filling of the aforementioned rectangle, we have a corresponding Dyck path and vice versa, so the number of valid $i \times 2$ rectangular tableaux is $C_i$.

To introduce a violation at the $(i+1)$st row, i.e. at $a_i$, we must fill the remaining $t-s-i$ spots of the first column with choices from the $t-(2i+1)$ elements $2i+2, \dots, t$, each of which would give the desired violation. Subtracting all possible invalid fillings by running over all rows where the first violation can occur, we obtain the result:
$$k_{(1^s,-1^s),\emptyset}(t) = \binom{t}{t-s} - \sum_{i \geq 0} C_i \binom{t-(2i+1)}{t-s-i}.$$
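The counting argument of Theorem 6 can be checked numerically for small parameters. The sketch below (our illustration; the helper names are ours, and rows are indexed from zero as in the proof) enumerates all increasing first columns, keeps those satisfying $a_i < 2(i+1)$, and compares the count with the Catalan-corrected binomial sum, under the usual convention that out-of-range binomial coefficients vanish.

```python
from math import comb
from itertools import combinations

def catalan(i):
    return comb(2 * i, i) // (i + 1)

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0  # vanishes outside the usual range

def count_valid_columns(t, s):
    """Count increasing first columns (a_0 < ... < a_{t-s-1}) chosen from
    {1, ..., t} such that a_i < 2(i+1) for the s rows meeting column two."""
    total = 0
    for col in combinations(range(1, t + 1), t - s):
        if all(col[i] < 2 * (i + 1) for i in range(s)):
            total += 1
    return total

for s in range(4):
    for t in range(2 * s, 2 * s + 5):
        formula = binom(t, t - s) - sum(
            catalan(i) * binom(t - (2 * i + 1), t - s - i) for i in range(s + 1))
        assert count_valid_columns(t, s) == formula
```

In particular, at $t = 2s$ the count collapses to the Catalan number $C_s$, matching the Dyck path bijection used in the proof.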
We can also prove a simpler formula for this Kostka polynomial.

Theorem 7.
$$k_{(1^s,-1^s)\emptyset}(t) = \binom{t+1}{s} - 2\binom{t}{s-1}.$$

Proof. By branching rules,
$$k_{(1^s,-1^s)\emptyset}(t+1) = k_{(1^s,-1^s)\emptyset}(t) + k_{(1^{s-1},-1^{s-1})\emptyset}(t).$$
We define
$$q(t,s) := k_{(1^s,-1^s)\emptyset}(t),$$
so that the branching rule and our argument above give us the recurrence relation with boundary conditions
$$q(t+1, s) = q(t, s) + q(t, s-1), \qquad q(t, 0) = 1, \qquad q(2s, s) = C_s.$$
We note that the variables are constrained by $s \geq 0$ and $t \geq 2s$. Now, by another straightforward inductive argument, we show that
$$q(t,s) = \binom{t+1}{s} - 2\binom{t}{s-1}.$$
It is clear that the base cases are satisfied (with the convention $\binom{t}{-1} = 0$), as
$$q(t,0) = 1 = \binom{t+1}{0} - 2\binom{t}{-1}$$
and
$$q(2s,s) = \binom{2s+1}{s} - 2\binom{2s}{s-1} = \binom{2s}{s} - \binom{2s}{s-1} = \frac{(2s)!}{s!\,s!} - \frac{(2s)!}{(s-1)!\,(s+1)!} = \frac{(s+1)(2s)! - s(2s)!}{s!\,(s+1)!} = \frac{(2s)!}{s!\,(s+1)!} = C_s.$$
Now, assuming that the statement holds for all $t'$ and $s'$ such that $t' + s' < t + s$, we use the recurrence relation to obtain
$$q(t,s) = q(t-1,s) + q(t-1,s-1) = \binom{t}{s} - 2\binom{t-1}{s-1} + \binom{t}{s-1} - 2\binom{t-1}{s-2} = \binom{t+1}{s} - 2\binom{t}{s-1},$$
as desired.
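The closed form of Theorem 7 can be checked against the recurrence for $q(t,s)$ and against the Catalan sum of Theorem 6 (an illustrative sketch, with out-of-range binomial coefficients taken to vanish):

```python
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0  # vanishes outside the usual range

def catalan(i):
    return comb(2 * i, i) // (i + 1)

def q7(t, s):
    """Theorem 7 closed form."""
    return binom(t + 1, s) - 2 * binom(t, s - 1)

def q6(t, s):
    """Theorem 6 Catalan sum."""
    return binom(t, t - s) - sum(
        catalan(i) * binom(t - (2 * i + 1), t - s - i) for i in range(s + 1))

for s in range(6):
    for t in range(2 * s, 2 * s + 6):
        assert q7(t, s) == q6(t, s)                      # the two formulas agree
        assert q7(t + 1, s) == q7(t, s) + q7(t, s - 1)   # the recurrence holds
    assert q7(2 * s, s) == catalan(s)                    # boundary q(2s, s) = C_s
assert all(q7(t, 0) == 1 for t in range(10))             # boundary q(t, 0) = 1
```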
12. Remarks
The Kostka numbers $k_{(s,-s)\emptyset}(t)$ and $k_{(1^s,-1^s)\emptyset}(t)$ discussed above are examples of asymptotic behaviour involving two variables, a more general situation than that discussed in the majority of this project, which involved only the variable $t$.

We notice that $k_{(s,-s)\emptyset}(t)$ is a polynomial in $t$ for fixed $s$, and a polynomial in $s$ for fixed $t$. Furthermore, it can be expressed by a single binomial coefficient.

The Kostka number $k_{(1^s,-1^s)\emptyset}(t)$, on the other hand, while a polynomial in $t$ when $s$ is fixed, is only a polynomial in $s$ for fixed $t - s$. It is, however, still a combination of only two binomial coefficients.
A very general problem to study would be to examine Kostka functions
$$p(s_1, \dots, s_k, t_1, \dots, t_l) = k_{(\lambda_1^{s_1}, \dots, \lambda_k^{s_k})\,(\mu_1^{t_1}, \dots, \mu_l^{t_l})},$$
where $\lambda = (\lambda_1, \dots, \lambda_k)$ and $\mu = (\mu_1, \dots, \mu_l)$ are weakly decreasing sequences of integers, with the hope that $p$ can be expressed as a finite combination of multinomial coefficients.
Appendix: Solving Difference Equations

We record a statement used repeatedly in the text.

Theorem A. Let
$$p(t) = \sum_{k=0}^{d} \alpha_k \binom{t+k}{k},$$
and let $a_t$ be a sequence such that $a_{t+1} - a_t = p(t)$ for all $t \geq 0$. Then
$$a_t = C + \sum_{k=1}^{d+1} (\alpha_{k-1} - \alpha_k) \binom{t+k}{k},$$
where $\alpha_{d+1} = 0$ and $C$ is some constant. In particular, $a_t$ is a polynomial in $t$ of degree $d+1$ with leading coefficient $\frac{p_d}{d+1}$, where $d = \deg(p)$ and $p_d$ is the leading coefficient of $p(t)$.
Proof. We first note that if this difference equation has a solution, it is unique up to a constant. This is easily seen because if
$$b_{t+1} - b_t = a_{t+1} - a_t,$$
then
$$b_{t+1} - a_{t+1} = b_t - a_t \quad \forall t,$$
so $b_t - a_t = C$ is constant. Consider
$$b_t = \sum_{k=0}^{d} \alpha_k \left( \binom{t+k+1}{k+1} - \binom{t+k}{k} \right) = -\alpha_0 + \sum_{k=1}^{d+1} (\alpha_{k-1} - \alpha_k) \binom{t+k}{k},$$
where the constant $-\alpha_0$ is absorbed into $C$ in the statement. Then
$$b_{t+1} - b_t = \sum_{k=0}^{d} \alpha_k \left( \binom{t+k+2}{k+1} - \binom{t+k+1}{k} \right) - \alpha_k \left( \binom{t+k+1}{k+1} - \binom{t+k}{k} \right)$$
$$= \sum_{k=0}^{d} \alpha_k \left( \binom{t+k+1}{k+1} + \binom{t+k+1}{k} - \binom{t+k+1}{k} - \binom{t+k+1}{k+1} + \binom{t+k}{k} \right) = \sum_{k=0}^{d} \alpha_k \binom{t+k}{k} = p(t).$$
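Theorem A can be checked numerically: accumulate $a_t$ from $p(t)$ and compare with the closed form (a sketch; the coefficient and initial-value choices below are arbitrary):

```python
from math import comb

alpha = [3, -5, 2, 7]            # arbitrary coefficients alpha_0, ..., alpha_d (d = 3)
d = len(alpha) - 1

def p(t):
    """p(t) expanded in the binomial basis."""
    return sum(alpha[k] * comb(t + k, k) for k in range(d + 1))

# a_t defined by an arbitrary initial value and a_{t+1} - a_t = p(t)
a = [11]
for t in range(30):
    a.append(a[-1] + p(t))

def closed_form(t, C):
    """C + sum_{k=1}^{d+1} (alpha_{k-1} - alpha_k) * binom(t+k, k), alpha_{d+1} = 0."""
    ext = alpha + [0]
    return C + sum((ext[k - 1] - ext[k]) * comb(t + k, k) for k in range(1, d + 2))

C = a[0] - closed_form(0, 0)     # fix the constant from the initial value
assert all(a[t] == closed_form(t, C) for t in range(31))
```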
References

[BKLS] G. Benkart, S.-J. Kang, H. Lee, D.-U. Shin, The polynomial behavior of weight multiplicities for classical simple Lie algebras and classical affine Kac-Moody algebras, Contemp. Math. 248.
[F] W. Fulton, Young Tableaux: With Applications to Representation Theory and Geometry. Cambridge University Press, 1997.
[FH] W. Fulton and J. Harris, Representation Theory: A First Course. Springer, 2004.
[GW] R. Goodman and N.R. Wallach, Representations and Invariants of the Classical Groups. Cambridge University Press, 1998.
[PS] I. Penkov and K. Styrkas, Tensor Representations of Classical Locally Finite Lie Algebras, in Developments and Trends in Infinite-Dimensional Lie Theory, 2010.
[W] http://mathworld.wolfram.com/DyckPath.html