NON-COLLIDING BROWNIAN PATHS AND THE GOE EIGENVALUE DENSITY
1. THE KARLIN-MCGREGOR THEOREM
The Weyl chamber W = W_N is the set of all points x = (x_1, x_2, ..., x_N) ∈ R^N whose coordinates
are (strictly) ordered, that is,
$$ x_1 < x_2 < \cdots < x_N. $$
We will be interested in N-dimensional Brownian paths that begin at a point x of W and do
not exit W by time t. (Recall that an N-dimensional Brownian motion started at x consists of
N independent one-dimensional Brownian motions started at x_1, x_2, ...; thus, the event that the
N-dimensional Brownian path does not exit W coincides with the event that N independent
one-dimensional Brownian particles started at different points experience no collisions.) Denote
by P^x the probability measure on path space governing the distribution of N-dimensional
Brownian motion W_t = (W_t^1, W_t^2, ..., W_t^N) started at x, and let p_t(x, y) be the Gauss kernel
$$ p_t(x, y) = \frac{e^{-(y-x)^2/2t}}{\sqrt{2\pi t}}. $$
Theorem 1. (Karlin-McGregor) Let τ be the time of first exit from the Weyl chamber W. Then for
all x, y ∈ W,
(1)   $$ P^x\{ W^i_{t\wedge\tau} \in dy_i \ \text{for all } i \le N \} = \det\bigl( p_t(x_i, y_j) \bigr)_{i,j \le N}\, dy, $$
where dy = dy_1 dy_2 ⋯ dy_N.
Proof. Inclusion-exclusion plus the reflection principle: expand the determinant as the sum over permutations σ of (−1)^σ ∏_i p_t(x_i, y_σ(i)); reflecting two paths at their first collision time pairs off the colliding contributions, which therefore cancel, leaving exactly the non-colliding event with σ the identity.
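To make (1) concrete, here is a small numerical sanity check (not part of the original notes; the function names and parameter values are ad hoc). For N = 2 it integrates the 2×2 Karlin-McGregor determinant over the Weyl chamber and compares the result with the exact non-collision probability obtained by applying the reflection principle to the difference W^2 − W^1.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import erf

def p(t, a, b):
    """One-dimensional Gauss kernel p_t(a, b)."""
    return np.exp(-(b - a) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def km_no_collision_prob(x1, x2, t):
    """P^x{tau > t} for N = 2: integrate det(p_t(x_i, y_j)) over {y1 < y2}."""
    integrand = lambda y2, y1: p(t, x1, y1) * p(t, x2, y2) - p(t, x1, y2) * p(t, x2, y1)
    val, _ = dblquad(integrand, -np.inf, np.inf, lambda y1: y1, lambda y1: np.inf)
    return val

x1, x2, t = 0.0, 1.0, 2.0
print(km_no_collision_prob(x1, x2, t))        # Karlin-McGregor value
print(erf((x2 - x1) / (2 * np.sqrt(t))))      # reflection-principle value; the two should agree
```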
2. LARGE-t ASYMPTOTICS
Theorem 2. (Grabiner) For each x ∈ W, as t → ∞,
(2)   $$ P^x\{\tau > t\} \sim C_N\, \Delta_N(x)\, t^{-\binom{N}{2}/2}, $$
where ∆_N(·) denotes the van der Monde determinant and C_N > 0 is the normalizing constant for
the GOE eigenvalue density. Furthermore, for each x ∈ W, as t → ∞,
(3)   $$ P^x\{W_{t\wedge\tau} \in dy\} \sim (2\pi t)^{-N/2}\, \Delta_N(x)\, \Delta_N(y) \prod_{j=0}^{N-1} \frac{1}{j!}\; t^{-\binom{N}{2}}\, e^{-\|y\|^2/2t}\, dy, $$
uniformly for all y ∈ W whose coordinates satisfy y_i < t^{3/4}.
Remark 1. You can’t help but notice that the right side is, after rescaling, the GOE eigenvalue density – see Corollary 3 below. Grabiner proves a similar formula that holds for the Weyl chamber
of an arbitrary Coxeter group. The case considered here is the Weyl chamber of the permutation
group S_N. Grabiner's proof relies on properties of the Schur functions, whereas the proof below
is completely elementary, using only basic properties of the determinant.
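As a quick sanity check on (2) (not part of the original notes): when N = 2 the difference W_t^2 − W_t^1 is a one-dimensional Brownian motion with variance parameter 2 started at x_2 − x_1, and the reflection principle gives
$$ P^x\{\tau > t\} = 2\Phi\!\Bigl(\frac{x_2 - x_1}{\sqrt{2t}}\Bigr) - 1 \sim \frac{x_2 - x_1}{\sqrt{\pi t}} \qquad (t \to \infty), $$
which exhibits exactly the ∆_2(x) t^{-1/2} decay asserted in (2), since ∆_2(x) = x_2 − x_1 and \binom{N}{2}/2 = 1/2 for N = 2.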
Proof. The plan is to integrate the Karlin-McGregor density (1) over all y ∈ W. Observe first that
only points y whose coordinates satisfy y_i < t^{3/4} contribute substantially to the probability,
because for a standard one-dimensional Brownian motion B_t started at a > 0,
$$ P^a\{B_t > t^{3/4}\} \le P^0\{B_t > t^{5/8}\}
   = (2\pi)^{-1/2} \int_{z > t^{1/8}} e^{-z^2/2}\, dz
   \le (2\pi)^{-1/2} \int_{z > t^{1/8}} e^{-t^{1/8} z/2}\, dz
   \le 2\, (2\pi)^{-1/2}\, t^{-1/8}\, e^{-t^{1/4}/2} $$
for all sufficiently large t, which is much smaller than t^{-\binom{N}{2}/2}. Next, factor the exponentials in
the entries of the Karlin-McGregor matrix to obtain
(4)   $$ \det\bigl(p_t(x_i, y_j)\bigr)_{i,j} = (2\pi t)^{-N/2}\, e^{-\|x\|^2/2t}\, e^{-\|y\|^2/2t}\, \det\bigl(e^{x_i y_j/t}\bigr)_{i,j}. $$
The problem now is to analyze the large-t behavior of det(e^{x_i y_j/t}). Since x is fixed and y_j <
t^{3/4}, the quantity in the exponential becomes vanishingly small as t → ∞. Hence, we should be
able to deduce the behavior of the determinant by making Taylor series approximations to the
exponentials. Here is the complete power series expansion:
(5)   $$ \det\bigl(e^{x_i y_j/t}\bigr)_{i,j} = \det\Bigl( \sum_{k=0}^{\infty} (x_i y_j/t)^k / k! \Bigr)_{i,j}. $$
This exhibits each column of the matrix as an infinite sum. If the series is truncated at some
K < ∞ then the j-th column is
(6)   $$ \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
      + \frac{y_j}{t} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}
      + \cdots
      + \frac{y_j^K}{K!\, t^K} \begin{pmatrix} x_1^K \\ x_2^K \\ \vdots \\ x_N^K \end{pmatrix}
      + O(t^{-K-1}). $$
If the error term O(t^{-K-1}) is ignored, the determinant (5) can be evaluated by using the multilinearity
rule for determinants. This implies that the (truncated) determinant can be written as
a sum of determinants, each obtained by choosing one of the summands in (6) for each of the N
columns.
How large must the truncation level K be in order that the asymptotic behavior of the truncated
determinant is the same as that of det(e^{x_i y_j/t})? First, observe that in the sum (6), each
summand corresponds to an integer power 0 ≤ k ≤ K. In the decomposition obtained by using
the multilinearity rule, a different power k must be chosen for each of the N columns, because
otherwise the resulting matrix will have rank < N and hence determinant 0. (For example, when
N = 2, the choice k = 1 for both columns leaves the matrix
$$ \begin{pmatrix} x_1 y_1 & x_1 y_2 \\ x_2 y_1 & x_2 y_2 \end{pmatrix}, $$
which has rank 1.) Consequently, we must have at least K ≥ N − 1 in order to obtain a nonzero
approximation. In fact, K = N − 1 is sufficient for first-order asymptotics. To prove this, we will
show that the matrix
$$ M(t; x, y) := \Bigl( \sum_{k=0}^{N-1} \frac{(x_i y_j)^k}{k!\, t^k} \Bigr)_{i,j \le N} $$
has positive determinant and therefore is invertible for all x, y ∈ W. This will imply¹ that
$$ \bigl(e^{x_i y_j/t}\bigr)_{i,j \le N} = M(t; x, y)\, \bigl(I + R(t; x, y)\bigr) $$
where R(t; x, y) = O(t^{-1/4}) uniformly in the range ‖x‖, ‖y‖ ≤ t^{3/4}. It will then follow that
(7)   $$ \det\bigl(e^{x_i y_j/t}\bigr)_{i,j \le N} = \det M(t; x, y)\, \bigl(1 + O(t^{-1/4})\bigr). $$
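The relation (7) can also be observed numerically. The following sketch (not from the notes; the point x, y and the values of t are arbitrary choices) compares det(e^{x_i y_j/t}) with det M(t; x, y) for N = 3 and increasing t.

```python
import numpy as np
from math import factorial

x = np.array([0.5, 1.2, 2.0])   # a point of the Weyl chamber (N = 3)
y = np.array([0.3, 1.0, 2.4])

def det_exp(t):
    return np.linalg.det(np.exp(np.outer(x, y) / t))

def det_M(t):
    # truncated matrix M(t; x, y): Taylor expansion cut off at K = N - 1 = 2
    A = sum(np.outer(x, y) ** k / (factorial(k) * t ** k) for k in range(len(x)))
    return np.linalg.det(A)

for t in [10.0, 100.0, 1000.0]:
    print(t, det_exp(t) / det_M(t))   # ratios approach 1 as t grows, in line with (7)
```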
So consider the determinant of the matrix M(t; x, y). When this is expanded using multilinearity,
the only nonzero determinants are those where each power k = 0, 1, ..., N − 1 is used once. The
number of such assignments of powers k to columns j is N!, one for each permutation σ ∈ S_N.
For any such assignment, the power of t^{-1} in the resulting determinant is 0 + 1 + 2 + ⋯ + (N − 1) =
\binom{N}{2}. Consequently,
$$ \det M(t; x, y) = t^{-\binom{N}{2}} \sum_{\sigma \in S_N} \prod_{k=0}^{N-1} \frac{1}{k!} \prod_{j=1}^{N} y_j^{\sigma(j)-1}\,
   \det \begin{pmatrix}
   x_1^{\sigma(1)-1} & x_1^{\sigma(2)-1} & \cdots & x_1^{\sigma(N)-1} \\
   x_2^{\sigma(1)-1} & x_2^{\sigma(2)-1} & \cdots & x_2^{\sigma(N)-1} \\
   \vdots & & \vdots \\
   x_N^{\sigma(1)-1} & x_N^{\sigma(2)-1} & \cdots & x_N^{\sigma(N)-1}
   \end{pmatrix}. $$
If the columns of the matrix in this last determinant are permuted so that the various powers are in
increasing order, the result is a van der Monde determinant multiplied by the sign of the permutation
(the required permutation is σ^{-1}, which has the same sign as σ):
$$ \det \begin{pmatrix}
   x_1^{\sigma(1)-1} & x_1^{\sigma(2)-1} & \cdots & x_1^{\sigma(N)-1} \\
   x_2^{\sigma(1)-1} & x_2^{\sigma(2)-1} & \cdots & x_2^{\sigma(N)-1} \\
   \vdots & & \vdots \\
   x_N^{\sigma(1)-1} & x_N^{\sigma(2)-1} & \cdots & x_N^{\sigma(N)-1}
   \end{pmatrix}
   = (-1)^{\sigma} \det \begin{pmatrix}
   x_1^{0} & x_1^{1} & \cdots & x_1^{N-1} \\
   x_2^{0} & x_2^{1} & \cdots & x_2^{N-1} \\
   \vdots & & \vdots \\
   x_N^{0} & x_N^{1} & \cdots & x_N^{N-1}
   \end{pmatrix}
   = (-1)^{\sigma}\, \Delta_N(x). $$
Hence,
$$ \det M(t; x, y) = t^{-\binom{N}{2}}\, \Delta_N(x) \prod_{k=0}^{N-1} \frac{1}{k!}
   \sum_{\sigma \in S_N} (-1)^{\sigma} \prod_{j=1}^{N} y_j^{\sigma(j)-1}. $$
Now we recognize the inner sum over σ ∈ S_N as another van der Monde determinant:
$$ \sum_{\sigma \in S_N} (-1)^{\sigma} \prod_{j=1}^{N} y_j^{\sigma(j)-1} = \Delta_N(y). $$
The product of factorials is also a van der Monde determinant:
$$ \prod_{k=0}^{N-1} k! = \prod_{0 \le i < j \le N-1} (j - i) = \Delta_N([N]), $$
where [N] = (0, 1, 2, ..., N − 1) (poor notation, but brief). This leads to the simple formula
(8)   $$ \det M(t; x, y) = t^{-\binom{N}{2}}\, \Delta_N(x)\, \Delta_N(y) / \Delta_N([N]), $$
which shows that det M(t; x, y) is positive and hence establishes the asymptotic relation (7).
¹This is because the matrix operations in the group GL_N(R) are smooth; see any standard text on Lie groups or matrix groups.
This proves that as t → ∞, uniformly for ‖y‖ < t^{3/4},
(9)   $$ \det\bigl(e^{x_i y_j/t}\bigr)_{i,j} \sim t^{-\binom{N}{2}}\, \Delta_N(x)\, \Delta_N(y) / \Delta_N([N]), $$
and this, together with (4), implies (3). (The factor e^{-‖x‖²/2t} disappears because ‖x‖²/t → 0 as
t → ∞.)
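In fact the multilinearity argument shows that (8) is an exact algebraic identity, not merely an asymptotic one. Here is a symbolic verification for N = 3 (a sketch, not part of the notes; sympy is assumed to be available).

```python
import sympy as sp

N = 3
t = sp.symbols('t', positive=True)
x = sp.symbols('x1:4')
y = sp.symbols('y1:4')

def vdm(z):
    """van der Monde determinant prod_{i<j} (z_j - z_i)."""
    out = sp.Integer(1)
    for i in range(N):
        for j in range(i + 1, N):
            out *= z[j] - z[i]
    return out

# truncated Karlin-McGregor matrix M(t; x, y)
M = sp.Matrix(N, N, lambda i, j: sum((x[i] * y[j]) ** k / (sp.factorial(k) * t ** k)
                                     for k in range(N)))

lhs = sp.expand(M.det())
rhs = sp.expand(vdm(x) * vdm(y) / (vdm(list(range(N))) * t ** (N * (N - 1) // 2)))
print(sp.simplify(lhs - rhs) == 0)   # expect True: this is formula (8) for N = 3
```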
Finally, we can use (9) in conjunction with equation (4) to evaluate the probability (2) by integrating
over all y ∈ W. Recall that only those y whose coordinates satisfy y_j < t^{3/4} matter, and for
these the asymptotic formula (9) applies uniformly in y. Consequently,
$$ P^x\{\tau > t\} = \int_{y \in W} \det\bigl(p_t(x_i, y_j)\bigr)_{i,j}\, dy
   \sim (2\pi t)^{-N/2}\, t^{-\binom{N}{2}}\, \frac{\Delta_N(x)}{\Delta_N([N])}\, e^{-\|x\|^2/2t} \int_{y \in W} \Delta_N(y)\, e^{-\|y\|^2/2t}\, dy
   \sim (2\pi)^{-N/2}\, t^{-\binom{N}{2}/2}\, \frac{\Delta_N(x)}{\Delta_N([N])} \int_{y \in W} \Delta_N(y)\, e^{-\|y\|^2/2}\, dy
   = (2\pi)^{-N/2}\, t^{-\binom{N}{2}/2}\, \frac{\Delta_N(x)}{\Delta_N([N])}\, C_N, $$
where C_N is the normalizing constant for the GOE eigenvalue density. (Note: in the change of
variable y = √t u, the dy contributes a factor t^{N/2} and the van der Monde determinant ∆_N(y) a
factor t^{N(N-1)/4}; these absorb part of the prefactor (2πt)^{-N/2} t^{-\binom{N}{2}}, leaving
(2π)^{-N/2} t^{-\binom{N}{2}/2}.)
Corollary 3. For each x, y ∈ W,
(10)   $$ \lim_{t \to \infty} P^{x/\sqrt{t}}\bigl(W_1 \in dy \mid \tau > 1\bigr) = C_N\, \Delta_N(x)\, \Delta_N(y)\, e^{-\|y\|^2/2}\, dy. $$
Proof. This is just Brownian scaling. Recall that if W_t is a Brownian motion, then for any a > 0 so
is a^{-1/2} W_{at}. Since the Weyl chamber W is left invariant by the scaling x ↦ a^{-1/2} x, it follows that
$$ P^{x/\sqrt{t}}\{\tau > 1 \ \text{and}\ W_1^i \in dy_i \ \forall\, i \le N\}
   = P^{x}\{\tau > t \ \text{and}\ W_t^i \in \sqrt{t}\, dy_i \ \forall\, i \le N\} $$
and
$$ P^{x/\sqrt{t}}\{\tau > 1\} = P^{x}\{\tau > t\}. $$
The result therefore follows from Theorem 2.
This gives an entirely new representation for the eigenvalue density of the GOE: Start N Brownian particles “at” 0; then the conditional joint distribution of their locations at time 1 given that
there are no collisions by time 1 is the GOE-eigenvalue distribution.
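The following Monte Carlo sketch (not part of the notes; all numerical choices are ad hoc) illustrates this for N = 2. Two particles are started at ∓ε as a stand-in for the limit x/√t → 0, collisions are detected with a Brownian-bridge correction, and the gap of the surviving paths at time 1 is compared with the density (1/2) g e^{−g²/4}, which is what one gets by integrating ∆_2(y) e^{−‖y‖²/2} over the centre of mass.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, eps = 200_000, 100, 0.05
dt = 1.0 / n_steps

# two independent Brownian particles started at -eps and +eps
w = np.tile(np.array([-eps, eps]), (n_paths, 1)).astype(float)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    g_old = w[:, 1] - w[:, 0]
    w += rng.normal(scale=np.sqrt(dt), size=w.shape)
    g_new = w[:, 1] - w[:, 0]
    alive &= (g_old > 0) & (g_new > 0)
    # Brownian-bridge correction: the gap is a BM with variance parameter 2, so the
    # chance of an unseen collision between grid points is exp(-g_old*g_new/dt)
    alive &= rng.random(n_paths) > np.exp(-np.maximum(g_old, 0) * np.maximum(g_new, 0) / dt)

gap = w[alive, 1] - w[alive, 0]
# under the limiting gap density (1/2)*g*exp(-g^2/4) on g > 0, the mean gap is sqrt(pi)
print("survivors:", alive.sum())
print("empirical mean gap:", gap.mean(), "predicted:", np.sqrt(np.pi))
```

The empirical mean should be close to √π ≈ 1.77, up to Monte Carlo error and the small bias caused by starting at ±ε rather than exactly at 0.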
3. HARMONIC FUNCTIONS, h-PROCESSES, AND THE VAN DER MONDE DETERMINANT
3.1. The VDM determinant is harmonic. A harmonic function in R^N is a smooth, real-valued
function h that satisfies the Laplace equation
(11)   $$ \Bigl( \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \cdots + \frac{\partial^2}{\partial x_N^2} \Bigr) h = 0. $$
The second-order operator ∑_i ∂²/∂x_i² is usually denoted by ∆, a convention I will adhere to
despite the use of the same letter ∆_N for the van der Monde determinant. Now, by an interesting
and peculiar coincidence, the van der Monde determinant ∆N (x) just happens to be a harmonic
function of x, as the following proposition asserts.
Proposition 4. ∆∆N (x) = 0.
Proof. Use the representation
(12)   $$ \Delta_N(x) = \sum_{\sigma \in S_N} (-1)^{\sigma} \prod_{i=1}^{N} x_i^{\sigma(i)-1}; $$
applying the Laplacian operator gives
(13)   $$ \Delta \Delta_N(x) = \sum_{j=1}^{N} \sum_{\sigma \in S_N} (-1)^{\sigma}\, (\sigma(j)-1)(\sigma(j)-2)\, x_j^{-2} \prod_{i=1}^{N} x_i^{\sigma(i)-1}. $$
Each nonzero term of this sum is a multiple of a monomial of degree \binom{N}{2} − 2 in which there is
(exactly) one repeated power k ≥ 0. This is because each term in the sum (12) is ±1 times a
monomial in which the variables x_i occur to powers 0, 1, 2, ..., N − 1; after applying ∂²/∂x_j², one
of these powers is reduced by 2, so there is a "double". Furthermore (and this is the important
point), each nonzero term in the sum (13) occurs twice, but with possibly different signs, because
for each term with a repeated power k there are two indices that could have been reduced by 2
in the differentiation.
Example: When N = 4, two of the terms in the sum (12) are +x_1^0 x_2^1 x_3^2 x_4^3 and −x_1^0 x_2^3 x_3^2 x_4^1. When
the second partials ∂²/∂x_4² and ∂²/∂x_2² are applied to these monomials, the resulting terms will be
$$ +6\, x_1^0 x_2^1 x_3^2 x_4^1 \qquad \text{and} \qquad -6\, x_1^0 x_2^1 x_3^2 x_4^1. $$
The monomial x_1^0 x_2^1 x_3^2 x_4^1 will not occur in any other term of (13).
Claim: For each matching pair, the corresponding permutations σ, σ′ have opposite signs.
Proof of the claim: To fix the idea, consider first the example above. The two permutations (in
one-line notation) leading to the two matching terms are
$$ \sigma = (1, 2, 3, 4): \quad 1 \mapsto 1,\ 2 \mapsto 2,\ 3 \mapsto 3,\ 4 \mapsto 4
   \qquad \text{and} \qquad
   \sigma' = (1, 4, 3, 2): \quad 1 \mapsto 1,\ 2 \mapsto 4,\ 3 \mapsto 3,\ 4 \mapsto 2. $$
These permutations are of opposite signs, because the second is obtained from the first by composing
σ with the transposition (2, 4). (Note: multiplication by a transposition reverses the sign
of a permutation.) In the general case, if two matching permutations σ, σ′ are such that
$$ \sigma(i) = k - 2, \quad \sigma(j) = k, \qquad \sigma'(j) = k - 2, \quad \sigma'(i) = k, $$
and σ(l) = σ′(l) otherwise, then σ′ = τ ∘ σ where τ is the transposition τ = (k − 2, k).
The result follows directly from the claim, because the claim shows that each nonzero term of
(13) occurs once with a + and once with a −.
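A one-line symbolic check of Proposition 4 (not part of the notes; sympy is assumed to be available):

```python
import sympy as sp

N = 4
x = sp.symbols('x1:5')
vdm = sp.Mul(*[x[j] - x[i] for i in range(N) for j in range(i + 1, N)])
laplacian = sum(sp.diff(vdm, xi, 2) for xi in x)
print(sp.expand(laplacian))   # prints 0: the van der Monde determinant is harmonic (N = 4)
```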
3.2. Harmonic Functions and h-Processes.
Proposition 5. Let h : R^N → R be a harmonic function on R^N such that h and its first partial
derivatives grow at most polynomially at infinity, that is, for some m < ∞,
(14)   $$ \lim_{|x| \to \infty} |x|^{-m}\bigl(|h(x)| + |\nabla h(x)|\bigr) = 0. $$
Then h satisfies the following mean value property: for each x ∈ R^N and each t ≥ 0,
(15)   $$ E^x h(W_t) = h(x). $$
Proof. It suffices to show that the expectation is constant in t, that is, that its time derivative
vanishes. This is a direct consequence of Itô's formula, but here is an elementary proof. First,
check that the Gauss kernel (2πt)^{-N/2} exp{−‖x − y‖²/2t} satisfies both the backward and forward
heat equations
$$ \frac{\partial}{\partial t} p_t(x, y) - \frac{1}{2} \Delta_x\, p_t(x, y) = 0
   \qquad \text{and} \qquad
   \frac{\partial}{\partial t} p_t(x, y) - \frac{1}{2} \Delta_y\, p_t(x, y) = 0. $$
This is routine. Now differentiate under the integral and then integrate twice by parts (all of this
is justified by the polynomial growth hypothesis) to obtain
$$ \frac{d}{dt} E^x h(W_t) = \int \frac{\partial}{\partial t} p_t(x, y)\, h(y)\, dy
   = \frac{1}{2} \int \bigl(\Delta_y\, p_t(x, y)\bigr)\, h(y)\, dy
   = \frac{1}{2} \int p_t(x, y)\, \Delta_y h(y)\, dy = 0. $$
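As an illustration of the mean value property (not from the notes; the point x and time t are arbitrary choices), the following Monte Carlo sketch checks E^x ∆_3(W_t) = ∆_3(x). By Proposition 4, ∆_3 is harmonic, and it grows polynomially, so Proposition 5 applies.

```python
import numpy as np

rng = np.random.default_rng(2)

def vdm(w):
    """van der Monde determinant of each row of w (shape: samples x N)."""
    N = w.shape[1]
    out = np.ones(len(w))
    for i in range(N):
        for j in range(i + 1, N):
            out *= w[:, j] - w[:, i]
    return out

x = np.array([0.0, 1.0, 2.5])
t = 4.0
W_t = x + rng.normal(scale=np.sqrt(t), size=(1_000_000, 3))   # W_t under P^x
print(vdm(W_t).mean(), vdm(x[None, :])[0])   # E^x[Delta_3(W_t)] vs Delta_3(x), equal up to MC error
```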
Corollary 6. Let h : R^N → R be a harmonic function on R^N satisfying the polynomial growth
hypothesis of Proposition 5. Then for each x ∈ R^N, the process h(W_t) is a martingale under the
probability measure P^x.
Proof. This follows from the Markov property for Brownian motion. Denote by F_t the σ-algebra
generated by the random variables W_s, s ≤ t. Then by Proposition 5,
$$ E^x\bigl(h(W_{t+s}) \mid \mathcal{F}_t\bigr) = E^x\bigl(h(W_{t+s}) \mid W_t\bigr) = E^{W_t} h(W_s) = h(W_t). $$
Note: The polynomial growth hypothesis guarantees that the random variables h(W_t) have finite
first moments.
This argument doesn't really depend on the hypothesis that the underlying stochastic process
is Brownian motion: all that is needed is the Markov property and the mean value property. In
general, if Y_t is a Markov process on some state space Y, a real-valued function h on Y is said
to be harmonic with respect to Y_t if the mean value property E^y h(Y_t) = h(y) holds for all y ∈ Y
and t ≥ 0. If h is a harmonic function, then h(Y_t) is a P^y-martingale, by the same reasoning as in
Corollary 6.
Suppose now that the Markov process in question has transition probability densities with
respect to a reference measure µ on the state space. If h is a positive harmonic function for the
Markov process, then it may be used to construct a new system of transition probability densities
by the following rule:
(16)   $$ q_t(x, y) = p_t(x, y)\, \frac{h(y)}{h(x)}. $$
The Markov process with transition probabilities q_t(x, y) is said to be the h-transform of the
original process. This construction can be extended to the case of nonnegative harmonic functions
h by restricting the state space of the h-transform to the subset of the original state space
where h > 0.
Remark 2. I won't try to be precise about the hypotheses on the state space, the dominating
measure µ, etc., because we will only make use of the special case where the original Markov
process is Brownian motion on W̄ with absorption at the boundary ∂W.
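In the special case just mentioned, with the choice h = ∆_N used below, the fact that (16) defines honest transition probability densities amounts to the identity ∫_W ∆_N(y) p_t^*(x, y) dy = ∆_N(x), where p_t^* is the absorbed transition density. Here is a numerical check for N = 2 (a sketch, not from the notes; scipy is assumed to be available).

```python
import numpy as np
from scipy.integrate import dblquad

def p(t, a, b):
    return np.exp(-(b - a) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def q_total_mass(x1, x2, t):
    """Integral over {y1 < y2} of q_t(x, y) = (Delta_2(y)/Delta_2(x)) * det(p_t(x_i, y_j))."""
    km = lambda y2, y1: p(t, x1, y1) * p(t, x2, y2) - p(t, x1, y2) * p(t, x2, y1)
    integrand = lambda y2, y1: (y2 - y1) / (x2 - x1) * km(y2, y1)
    val, _ = dblquad(integrand, -np.inf, np.inf, lambda y1: y1, lambda y1: np.inf)
    return val

print(q_total_mass(0.0, 1.0, 1.0))   # should be (numerically) 1
```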
The next order of business is to give a second characterization of the h-process via likelihood
ratios. This will depend on the fact that if h is a nonnegative harmonic function for the Markov
process Y_t, then h(Y_t) is a nonnegative martingale relative to the natural filtration
$$ \mathcal{F}_t^Y = \sigma(Y_s)_{s \le t}. $$
Proposition 7. Let Z_t be a nonnegative martingale under a probability measure P with respect to
a filtration F_t, and suppose that Z_0 = 1. Then for each T < ∞, a probability measure Q_T can be
defined on the σ-algebra F_T by the following specification of its Radon-Nikodym derivative relative to P:
(17)   $$ \Bigl( \frac{dQ_T}{dP} \Bigr)_{\mathcal{F}_T} = Z_T. $$
Moreover, the system of probability measures Q_T is consistent, in the sense that the restriction of
Q_{T+S} to F_T is Q_T. Consequently, these probability measures have a unique extension Q_∞ to the
σ-algebra F_∞ generated by ∪_{T≥0} F_T.
Remark 3. The extension Q_∞ will not in general be absolutely continuous with respect to P on
F_∞; this will be the case if and only if the P-martingale Z_t is uniformly integrable. (Proof: Exercise.)
Proof of Proposition 7. That (17) defines a probability measure on F_T is a simple consequence of
the monotone convergence theorem for integrals. The consistency property is proved by appealing
to the uniqueness of Radon-Nikodym derivatives. Since Z_T is measurable with respect to F_T,
it suffices to show that for any bounded, nonnegative, F_T-measurable random variable Y,
$$ E_P\, Z_{T+S}\, Y = E_P\, Z_T\, Y. $$
But this follows immediately from the hypothesis that the process Z_t is a P-martingale.
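A familiar concrete instance of Proposition 7 (not taken from the notes) is the exponential martingale Z_t = exp{θW_t − θ²t/2} of a one-dimensional Brownian motion; by the Cameron-Martin theorem the measure Q_T it defines is Wiener measure with drift θ. The sketch below checks this on a single test function.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, T, n = 0.7, 1.0, 2_000_000

W_T = rng.normal(scale=np.sqrt(T), size=n)                 # W_T under P
Z_T = np.exp(theta * W_T - theta ** 2 * T / 2)             # likelihood ratio dQ_T/dP on F_T
f = np.cos
lhs = (Z_T * f(W_T)).mean()        # E_P[Z_T f(W_T)] = E_{Q_T}[f(W_T)]
rhs = f(W_T + theta * T).mean()    # direct sampling from the drifted (Cameron-Martin) law
print(lhs, rhs)                    # should agree up to Monte Carlo error
```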
Assume now that h is a nonnegative harmonic function relative to a Markov process Y_t, and
denote by P^x the probability measure under which Y_0 = x. Then under P^x the process Z_t =
h(Y_t)/h(x) is a likelihood ratio martingale satisfying the hypotheses of Proposition 7, provided
the initial state x is such that h(x) > 0. Consequently, for each such state x there is a probability
measure Q^x whose restriction to F_T has Radon-Nikodym derivative h(Y_T)/h(x).
Proposition 8. Under Q^x the process Y_t is the h-process associated with the original Markov process,
that is, it is Markov with transition probabilities q_t(x, y) given by (16).
Proof. To prove that the process Y_t is a Markov process under Q^y, it suffices to prove that for any
bounded, measurable function f : Y → R, any y ∈ Y, and all s, t ≥ 0,
$$ E_Q^y\bigl( f(Y_{t+s}) \mid \sigma(Y_r)_{r \le t} \bigr) = E_Q^{Y_t} f(Y_s). $$
This is easily accomplished using the hypothesis that under the original measure P^y the process
is Markov, together with the (crucial) fact that the likelihood ratios Z_t = h(Y_t)/h(y) are functions
only of the current state. Here goes: For simplicity assume that h(y) = 1. Let X be any bounded
random variable measurable with respect to σ(Y_r)_{r≤t}; then
$$ E_Q^y\, X f(Y_{t+s}) = E_P^y\, X\, h(Y_{t+s})\, f(Y_{t+s})
   = E_P^y\, X\, E_P^y\bigl( h(Y_{t+s}) f(Y_{t+s}) \mid \sigma(Y_r)_{r \le t} \bigr)
   = E_P^y\, X\, E_P^y\bigl( h(Y_{t+s}) f(Y_{t+s}) \mid \sigma(Y_t) \bigr)
   = E_P^y\, X\, E_P^{Y_t}\, h(Y_s) f(Y_s)
   = E_P^y\, X\, h(Y_t)\, E_Q^{Y_t} f(Y_s)
   = E_Q^y\, X\, E_Q^{Y_t} f(Y_s). $$
Hence, {Q^y}_{y ∈ Y} is a Markov system. The proof that it has transition probabilities (16) is left as an
exercise.
3.3. Conditioned Brownian Motion. Our interest is in the case where the Markov process is
Brownian motion in W̄ with absorption at ∂W and the (nonnegative) harmonic function is h(x) =
∆_N(x). The corresponding h-process has transition probabilities
$$ q_t(x, y) = \frac{\Delta_N(y)}{\Delta_N(x)}\, p_t^{*}(x, y), $$
where p_t^*(x, y) are the transition probabilities for the Brownian motion with absorption at ∂W,
i.e.,
$$ p_t^{*}(x, y)\, dy = P^x\{ W_{t\wedge\tau} \in dy \}, $$
where τ is the time of first exit from W. Denote the probability measures governing the h-process
by Q^x. Our next objective is to re-interpret these measures as conditional distributions.
For each R > 0, define the stopping time
$$ \nu_R := \inf\{ t \ge 0 : \Delta_N(W_t) \ge R \}. $$
Lemma 9. For each 0 < R < ∞ and x ∈ W,
$$ P^x\{ \tau \wedge \nu_R < \infty \} = 1. $$
Proof. The (N − 1)-dimensional process
$$ (W_t^2 - W_t^1,\; W_t^3 - W_t^2,\; \ldots,\; W_t^N - W_t^{N-1}) $$
is a Brownian motion in R^{N−1}. The event that the limsup (as t → ∞) of the minimum entry
min_i (W_t^i − W_t^{i−1}) is +∞ is a tail event (exercise: why?), hence has P^x-probability either 0 or 1. To show
that it cannot be 0 it will suffice to show that for every R < ∞ the probability that the minimum
entry min_i (W_t^i − W_t^{i−1}) exceeds R at some large t is at least 1/2^{N−1}. This follows because the
differences W_t^i − W_t^{i−1} are independent Brownian motions, and for any x ∈ W and R < ∞,
$$ \lim_{t \to \infty} P^x\{ W_t^2 - W_t^1 > R \} = \frac{1}{2}. $$
The lemma implies that for any x ∈ W and 0 < R < ∞ the process Z_t^R = ∆_N(W_{t∧τ∧ν_R})/∆_N(x)
stabilizes at one of the two values 0 or R/∆_N(x) almost surely (under P^x). Moreover, since the
process is bounded, it follows that it is uniformly integrable, and therefore L¹-closed. Consequently,
by Remark 3, the probability measures Q^x_{T,R} defined by
$$ \frac{dQ^x_{T,R}}{dP^x} = \frac{\Delta_N(W_{\tau \wedge \nu_R \wedge T})}{\Delta_N(x)} $$
can be extended to a probability measure Q^x_{∞,R} on the entire σ-algebra F_{τ∧ν_R}, with Radon-
Nikodym derivative
(18)   $$ \frac{dQ^x_{\infty,R}}{dP^x} = \frac{\Delta_N(W_{\tau \wedge \nu_R})}{\Delta_N(x)}. $$
This random variable assumes only two values, 0 or R/∆_N(x). Set
$$ A_R = \Bigl\{ \frac{dQ^x_{\infty,R}}{dP^x} = \frac{R}{\Delta_N(x)} \Bigr\}
   \qquad \text{and} \qquad
   B_R = \Bigl\{ \frac{dQ^x_{\infty,R}}{dP^x} = 0 \Bigr\}. $$
Since the likelihood ratio takes the value 0 on B_R, it follows that the measure Q^x_{∞,R} has its support
contained in A_R.
Lemma 10. The measure Q^x_{∞,R} is the conditional distribution (on the σ-algebra F_{τ∧ν_R}) of the
probability measure P^x given the event A_R, that is, for every event F ∈ F_{τ∧ν_R},
$$ Q^x_{\infty,R}(F) = P^x(F \mid A_R). $$
Proof. Both sides are probability measures, so the equality is a direct consequence of the fact
that the likelihood ratio dQ^x_{∞,R}/dP^x is 0 on B_R and constant on A_R.
Proposition 11. For any x, y ∈ W and each time 0 ≤ t < ∞,
(19)   $$ Q^x\{W_t \in dy\} = \lim_{R \to \infty} Q^x_{\infty,R}\{W_t \in dy\} = \lim_{R \to \infty} P^x(W_t \in dy \mid A_R) = \frac{\Delta_N(y)}{\Delta_N(x)}\, P^x\{W_{t\wedge\tau} \in dy\}. $$
Remark 4. The second limit in (19) can be interpreted as the "conditional probability that W_t ∈
dy given that it never exits the Weyl chamber". Thus, formula (19) gives a relation between Brownian
motion killed at ∂W and Brownian motion conditioned never to exit W.
Proof. The event A_R coincides with the event ν_R < τ. Moreover, ν_R → ∞ as R → ∞, by the continuity
of ∆_N and the continuity of Brownian paths, so ν_R ∧ t → t. By the preceding lemma,
the conditional probability P^x(· | A_R) coincides with Q^x_{∞,R}. This implies the second
equality in (19) (the equality of the two limits). For the third equality, proceed as follows:
$$ P^x(W_{t\wedge\tau\wedge\nu_R} \in dy \mid A_R) = Q^x_{\infty,R}(W_{t\wedge\tau\wedge\nu_R} \in dy)
   = E^x\, \mathbf{1}\{W_{t\wedge\tau\wedge\nu_R} \in dy\}\, \Delta_N(W_{\tau\wedge\nu_R})/\Delta_N(x)
   = E^x\, \mathbf{1}\{W_{t\wedge\tau\wedge\nu_R} \in dy\}\, \Delta_N(W_{t\wedge\tau\wedge\nu_R})/\Delta_N(x)
   = E^x\, \mathbf{1}\{W_{t\wedge\tau\wedge\nu_R} \in dy\}\, \Delta_N(y)\, \mathbf{1}\{t < \tau\wedge\nu_R\}/\Delta_N(x)
   \longrightarrow P^x\{W_t \in dy \ \text{and}\ \tau > t\}\, \Delta_N(y)/\Delta_N(x). $$
The equality in the second line follows from (18); the third line follows because ∆_N(W_t) is a
martingale under P^x, since the event {W_{t∧τ∧ν_R} ∈ dy} is in the σ-algebra F_{t∧ν_R∧τ}; and the fourth
line follows because on the event {W_t ∈ dy} it must be the case that t < τ, and as noted earlier
ν_R → ∞.
Finally, observe that the last probability P^x{W_t ∈ dy and τ > t}∆_N(y)/∆_N(x) defines a measure
on dy that is the h-transform of Brownian motion with absorption at ∂W on the σ-algebra
F_{t∧τ}. Since Q^x is the unique extension of the h-process to F_τ, it follows that Q^x{W_t ∈ dy}
coincides with the other three quantities in (19).
4. CONDITIONED BROWNIAN MOTION AND THE GUE
And now the payoff. The last expression in the chain of equalities (19) contains the factor
P^x{W_{t∧τ} ∈ dy}, the Karlin-McGregor probability (1). By Grabiner's formula (3), as t → ∞ this
probability behaves like a rescaled GOE eigenvalue density. In particular, if x and y are scaled
by 1/√t and time is scaled by 1/t, the probability P^x{W_{t∧τ} ∈ dy} becomes the probability in
Corollary 3. But the ratio ∆_N(y)/∆_N(x) is invariant under this rescaling:
$$ \frac{\Delta_N(y/\sqrt{t})}{\Delta_N(x/\sqrt{t})} = \frac{\Delta_N(y)}{\Delta_N(x)}. $$
Hence, (19) and Corollary 3 imply that
$$ \lim_{t \to \infty} Q^{x/\sqrt{t}}\{W_1 \in dy\} = C_N\, \Delta_N(y)^2\, e^{-\|y\|^2/2}\, dy. $$
Observe that in (10) ∆_N(x) appears in the numerator, whereas in (19) it appears in the denominator,
so these factors cancel. However, ∆_N(y) occurs in the numerator of both expressions, and
so it is squared, leaving us with the GUE eigenvalue density! This proves
Theorem 12.
(20)   $$ \lim_{t \to \infty} \lim_{R \to \infty} P^{x/\sqrt{t}}\bigl(W_1 \in dy \mid A_R\bigr) = C_N\, \Delta_N(y)^2\, \exp\{-\|y\|^2/2\}\, dy. $$
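To close the loop, here is a Monte Carlo sketch of Theorem 12 for N = 2 (not part of the notes; all numerical choices are ad hoc). It reuses the simulation sketched after Corollary 3 and implements the h-transform of (19) by reweighting the surviving paths by ∆_2(W_1). Integrating C_N ∆_2(y)² e^{−‖y‖²/2} over the centre of mass leaves the gap density proportional to g² e^{−g²/4}, whose mean is 4/√π ≈ 2.26.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, eps = 200_000, 100, 0.05
dt = 1.0 / n_steps

w = np.tile(np.array([-eps, eps]), (n_paths, 1)).astype(float)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    g_old = w[:, 1] - w[:, 0]
    w += rng.normal(scale=np.sqrt(dt), size=w.shape)
    g_new = w[:, 1] - w[:, 0]
    alive &= (g_old > 0) & (g_new > 0)
    # Brownian-bridge correction for collisions between grid points (gap has variance parameter 2)
    alive &= rng.random(n_paths) > np.exp(-np.maximum(g_old, 0) * np.maximum(g_new, 0) / dt)

g = w[alive, 1] - w[alive, 0]
# weighting each surviving (GOE-like) sample by Delta_2(W_1) = gap implements the h-transform;
# under the GUE gap density c*g^2*exp(-g^2/4) the mean gap is 4/sqrt(pi)
weighted_mean = np.average(g, weights=g)
print("weighted mean gap:", weighted_mean, "predicted:", 4 / np.sqrt(np.pi))
```

The two printed numbers should agree up to Monte Carlo error and the small bias from starting the particles at ±ε rather than exactly at 0.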