Equivalence and Perpendicularity of Local Field
Gaussian Measures
By
Steven N. Evans*
Technical Report No. 252
May 1990
*Research supported in part by an NSF grant
Department of Statistics
University of California
Berkeley, California
EQUIVALENCE AND PERPENDICULARITY OF LOCAL FIELD
GAUSSIAN MEASURES
BY
STEVEN N. EVANS*
1. Introduction.
One way of thinking about Gaussian measures is that they are the class of probability measures that naturally arise when we seek measures with properties that
are intimately linked to the linearity and orthogonality structure of the spaces on
which the measures are defined.
There are fields other than R or C for which there is a well-developed and
interesting theory of orthogonality for the vector spaces over them. These fields are
the so-called local fields. In Evans (1989) we worked from the above perspective
and defined a suitable concept of a "Gaussian" measure on vector spaces over
local fields. In many particulars the theory of such measures resembles the
Euclidean prototype, but there are a number of interesting departures.
Here we continue this investigation with a consideration of various questions
concerning equivalence, absolute continuity and perpendicularity of local field
Gaussian measures.
We begin in §2 with some preliminaries regarding both the general theory of
local fields and the particular properties of local field Gaussian measures.
In §3 we observe that, unlike the Gaussian case, one local field Gaussian measure can be absolutely continuous with respect to another without the two measures being equivalent, and that the only time two such measures will be equivalent is when they are equal.
Theorem 4.1 is a "Cameron-Martin"-type result on the effect of translating
local field Gaussian measures. As a consequence, we show in Theorems 4.2 and
4.3 that the local field Gaussian measures on a Banach space are precisely the normalised Haar measures on compact additive subgroups which satisfy an extra "convexity" condition. This allows us to conclude that there is a "flat" local field
Gaussian measure on a Banach space if and only if the space is finite dimensional
(see Corollary 4.4).
In Theorem 5.1 we examine the effect of contracting or dilating a local field
Gaussian measure, and observe that we get qualitatively different behaviour depending on whether the measure is "finite dimensional" or "infinite dimensional". In
the Banach space case we show that the "dimension" of the measure shows up in
the mass assigned to small balls around zero (see Theorem 5.2).
2. Preliminaries
We begin this section with a brief overview of some of the theory of local
fields. We refer the reader to Taibleson (1975) or Schikhof (1984) for fuller
accounts. Later we also recall some of the salient details from Evans (1989)
regarding local field Gaussian measures.
Let K be a locally compact, non-discrete, topological field. If K is connected,
then K is either R or C. If K is disconnected, then K is totally disconnected and
we say that K is a local field.
From now on, we let K be a fixed local field. There is a distinguished real-valued mapping on K which we denote by x ↦ |x| and call the valuation map. The set of values taken by the valuation is the set {q^k : k ∈ Z} ∪ {0}, where q = p^c for some prime p and positive integer c. The valuation has the following properties:
|x| = 0 ⟺ x = 0;
|xy| = |x| |y|;
|x + y| ≤ |x| ∨ |y|.
The last property is known as the ultrametric inequality and implies that if |x| ≠ |y| then |x + y| = |x| ∨ |y| (the so-called isosceles triangle property). The mapping (x, y) ↦ |x − y| on K × K is a metric which gives the topology of K.
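Perhaps the most familiar example of a local field is Q_p, the field of p-adic numbers; there q = p and |x| = p^{−v}, where v is the exponent of p in x. The following minimal Python sketch, assuming this choice of K (the helper p_adic_abs and the sample values are ours), checks the ultrametric and isosceles triangle properties numerically.

    from fractions import Fraction

    def p_adic_abs(x: Fraction, p: int) -> Fraction:
        """The p-adic absolute value |x| = p**(-v); by convention |0| = 0."""
        if x == 0:
            return Fraction(0)
        v = 0
        num, den = x.numerator, x.denominator
        while num % p == 0:            # count factors of p in the numerator
            num //= p
            v += 1
        while den % p == 0:            # ... and subtract those in the denominator
            den //= p
            v -= 1
        return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

    p = 3
    x, y = Fraction(9, 2), Fraction(5, 3)
    ax, ay, axy = p_adic_abs(x, p), p_adic_abs(y, p), p_adic_abs(x + y, p)
    print(ax, ay, axy)                       # 1/9  3  3
    assert axy <= max(ax, ay)                # ultrametric inequality
    assert ax == ay or axy == max(ax, ay)    # isosceles triangle property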
There is a unique measure μ on K for which
μ(x + A) = μ(A), x ∈ K,
μ(xA) = |x| μ(A), x ∈ K,
μ({x ∈ K : |x| ≤ 1}) = 1.
The measure μ is Haar measure, suitably normalised.
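For example, if ρ is any element of K with |ρ| = q^{−1}, then {x : |x| ≤ q^{−k}} = ρ^k {x : |x| ≤ 1}, so the scaling and normalisation properties give
μ({x : |x| ≤ q^{−k}}) = |ρ^k| μ({x : |x| ≤ 1}) = q^{−k}, k = 0, 1, 2, ...;
that is, the balls around 0 have geometrically decreasing mass.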
There is a character χ on the additive group of K with the properties
χ({x : |x| ≤ 1}) = {1}
and
χ({x : |x| ≤ q}) ≠ {1}.
For N = 1, 2, ..., the correspondence x ↦ χ_x, where χ_x(y) = χ(x · y), establishes an isomorphism between the additive group of K^N and its dual.
Let E be a vector space over K. A norm on E is a map ‖·‖_E : E → [0, ∞[ such that
‖x‖_E = 0 ⟺ x = 0;
‖λx‖_E = |λ| ‖x‖_E, λ ∈ K;
‖x + y‖_E ≤ ‖x‖_E ∨ ‖y‖_E.
The last property is also called the ultrametric inequality and implies the obvious generalisation of the isosceles triangle property. We call the pair (E, ‖·‖_E) a normed vector space (over K).
If E is complete in the metric (x, y) ↦ ‖x − y‖_E we say that E is a Banach space. For N = 1, 2, ... the space (K^N, |·|), where
|(x_1, ..., x_N)| = |x_1| ∨ ... ∨ |x_N|,
is a Banach space. More generally, if (Ω, F, P) is a probability space and we let L^∞ be the set of measurable functions f : Ω → K such that ess sup{|f(ω)| : ω ∈ Ω} < ∞, then L^∞ becomes a Banach space when we equip it with the norm ‖·‖_∞ defined by ‖f‖_∞ = ess sup{|f(ω)| : ω ∈ Ω} (we adopt the usual convention that we regard two functions as equal if they are equal almost surely).
A subset C of a normed vector space E is said to be orthogonal if for every finite subset {x_1, ..., x_N} ⊂ C and each λ_1, ..., λ_N ∈ K, we have
‖Σ_{i=1}^N λ_i x_i‖_E = ∨_{i=1}^N |λ_i| ‖x_i‖_E.
If, moreover, ‖x‖_E = 1 for all x ∈ C, then C is said to be orthonormal.
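For example, in (K^N, |·|) the standard basis vectors e_1, ..., e_N (e_i having a 1 in the i-th coordinate and 0 elsewhere) form an orthonormal set, since |e_i| = 1 and |λ_1 e_1 + ... + λ_N e_N| = |λ_1| ∨ ... ∨ |λ_N| = ∨_{i=1}^N |λ_i| |e_i|.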
We now recall from Evans (1989) our general definition for the local field
analogue of Gaussian measures. Let E be a measurable vector space (over K). Let
E_1 and E_2 be two copies of E and write X_i : E_1 × E_2 → E_i, i = 1, 2, for the two coordinate maps. A measure P on E is said to be K-Gaussian if for every pair of orthonormal vectors (α_11, α_12), (α_21, α_22) ∈ K^2 the law of (α_11 X_1 + α_12 X_2, α_21 X_1 + α_22 X_2) under P × P is also P × P.
Note: in future when we consider measures on a measurable vector space E we will always reserve the notation X for the identity map on E.
The K-Gaussian measures on E = K are those measures P such that X ∈ L^∞(P) and ∫ χ(tx) P(dx) = Φ(‖X‖_∞ |t|), t ∈ K, where we let Φ denote the indicator function of the interval [0, 1]. Thus, either X = 0 or
P(dx) = ‖X‖_∞^{−1} Φ(‖X‖_∞^{−1} |x|) μ(dx).
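In other words, the law of a non-zero K-Gaussian random variable on K is normalised Haar measure on the ball {x : |x| ≤ ‖X‖_∞}. The following is a minimal Python sketch, assuming K = Q_p with p = 3 and ‖X‖_∞ = 1, and truncating elements of the unit ball to N digits (so that a Haar-uniform element of the ball becomes a uniform residue mod p^N). It checks two consequences empirically: the ball masses P(|X| ≤ p^{−k}) = p^{−k}, and the fact that X_1 + X_2 has the same law as X_1 for independent copies X_1, X_2, which is the first coordinate of the defining property applied to the pair (1, 1), (0, 1) ∈ K^2 (one checks from the ultrametric inequality that these two vectors are orthonormal).

    import random

    p, N, samples = 3, 10, 100_000    # residue field size q = p, digits kept, Monte Carlo size

    def val(x: int) -> int:
        """p-adic valuation of x (number of factors of p), capped at N; val(0) = N."""
        if x == 0:
            return N
        v = 0
        while v < N and x % p == 0:
            x //= p
            v += 1
        return v

    X1 = [random.randrange(p**N) for _ in range(samples)]   # Haar-uniform on the unit ball, truncated
    X2 = [random.randrange(p**N) for _ in range(samples)]   # an independent copy
    S  = [(a + b) % p**N for a, b in zip(X1, X2)]            # X1 + X2, truncated

    for k in range(4):
        print(k,
              sum(val(x) >= k for x in X1) / samples,        # estimate of P(|X1| <= p**-k)
              sum(val(x) >= k for x in S)  / samples,        # estimate of P(|X1 + X2| <= p**-k)
              p**-k)                                         # exact value for both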
The theory of K-Gaussian measures is particularly tractable when B, the σ-field on the measurable vector space E, is the σ-field generated by some collection, F, of linear functionals on E. In this case we say that the triple (E, F, B) satisfies the hypothesis (*) of Evans (1989). One example is the case when E is a separable Banach space, F = E* (the dual of E) and B is the Borel σ-field of E.
If (E, F, B) satisfies the hypothesis (*) then a measure P on E is K-Gaussian if and only if T(X) is a K-valued, K-Gaussian random variable for all T ∈ span F. Moreover, P is then uniquely determined by the laws of the individual random variables T(X), T ∈ span F, and hence by the set of numbers ‖T(X)‖_∞, T ∈ span F.
3. Equivalence
A well-known feature of the Gaussian theory is, in a variety of general settings,
that two Gaussian measures on the same space are either equivalent or perpendicular. We refer the reader to Kuo (1975) for a discussion of such results and some
relevant references. An analogous theorem certainly doesn't hold for the K-Gaussian theory. For example, suppose that P and Q are two K-Gaussian measures on E = K such that ‖X‖_∞ = 1 in L^∞(P) and ‖X‖_∞ = q in L^∞(Q). Then P ≪ Q but P and Q are not equivalent: from the density formula above, dP/dQ = q Φ(|x|) Q-almost everywhere, whereas Q({x : |x| = q}) = 1 − q^{−1} > 0 while P assigns this set mass zero. As the following theorem shows, equivalence is a much more restrictive condition in the K-Gaussian case.
Theorem 3.1. Suppose that (E, F, B) satisfies the hypothesis (*). Let P and Q be two K-Gaussian measures on E. Then P and Q are equivalent if and only if P = Q.
Proof. Suppose that P and Q are equivalent. Then ‖T(X)‖_∞ is the same in L^∞(P) and L^∞(Q) for all T ∈ span F. As T(X) is K-valued, K-Gaussian for all such T, we see that the distribution of T(X) under P is the same as that under Q, and hence P = Q. The converse is obvious. □
4. Translation
Suppose that Z is a real-valued, centred, Gaussian process indexed by some set
I. Let H be the corresponding reproducing kernel Hilbert space. If z ∈ R^I then it is known that the laws of Z and z + Z are either equivalent or perpendicular, depending upon whether or not z ∈ H (see, for example, Feldman (1958) or Hajek (1959)).
In the K-Gaussian case we have the following analogue.
Theorem 4.1. Suppose that (E, F, B) satisfies the hypothesis (*). Let P be a K-Gaussian measure on E. Set
S = {x ∈ E : |T(x)| ≤ ‖T(X)‖_∞, all T ∈ span F}.
If x ∈ S then the law of x + X under P is P itself. Otherwise, the law of x + X under P is perpendicular to P.
Proof. Note that if Y is a K-valued, K-Gaussian random variable and y ∈ K is such that |y| ≤ ‖Y‖_∞ then, since the law of Y is Haar measure on the subgroup {z : |z| ≤ ‖Y‖_∞}, we see that the law of y + Y is that of Y. Hence, if x ∈ S we have that the law of T(x + X) = T(x) + T(X) under P is that of T(X) under P for all T ∈ span F. Thus the law of x + X under P is that of X under P.
If x ∉ S then there exists T ∈ span F such that |T(x)| > ‖T(X)‖_∞. Then, by the isosceles triangle property, |T(x + X)| = |T(x)| P-almost surely and hence the law of x + X is perpendicular to P. □
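For example, take E = K and let P be the law of a non-zero K-Gaussian random variable, so that S = {x : |x| ≤ ‖X‖_∞} and P is normalised Haar measure on this ball. Translating P by one of the ball's own elements leaves it unchanged, while if |x| > ‖X‖_∞ then, by the isosceles triangle property, x + X lies on the sphere {y : |y| = |x|} almost surely, a set to which P assigns no mass.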
We remark that S determines P uniquely. Also, S is an additive subgroup of E which is closed under multiplication by scalars λ ∈ K with |λ| ≤ 1. If we combine parts (i) and (ii) of the following result with Theorem 4.1 we see, when E is a separable Banach space, that the group S supports P, that S is compact, and that P is just normalised Haar measure on S. Part (iii) extends Theorem 6.1 of Evans (1989), where it was shown that if P is a K-Gaussian measure on a measurable vector space (E, B) and M is a measurable subspace of E then either P(M) = 0 or P(M) = 1. Here we see that if E is a separable Banach space then it is possible to give an analytic condition which determines which branch of the dichotomy holds. No comparable condition on the covariance structure seems to be known for the various Gaussian analogues of this zero-one law.
Theorem 4.2. Suppose that (E, ‖·‖_E) is a separable Banach space with dual E* and P is a K-Gaussian measure on E. Set
S = {x ∈ E : |T(x)| ≤ ‖T(X)‖_∞, all T ∈ E*}.
(i) The group S is the closed support of P.
(ii) The group S is compact.
(iii) If M is a measurable vector subspace of E then P(M) is either 1 or 0, depending on whether or not S ⊂ M.
Proof. (i) It is clear that S is closed. Let (T_i)_{i≥1} be a countable dense subset of E*. Then
P(S) = P(∩_{i≥1} {x ∈ E : |T_i(x)| ≤ ‖T_i(X)‖_∞}) = 1.
Conversely, suppose that x ∈ S and U is an open neighbourhood of x. Let (x_i)_{i≥1} be a countable dense subset of S. Then ∪_{i≥1} [x_i + (U − x)] covers S and hence P(x_i + (U − x)) > 0 for at least one i; but, by Theorem 4.1,
P(U) = P((x_i − x) + U), since x_i − x ∈ S.
(ii) As E is complete and separable, all probability measures on E are tight and so there exists a compact set C ⊂ S such that P(C) > 0. Let G be the smallest closed additive group containing C. We claim that G is also compact. Given ε > 0, there exists a finite set {x_1, ..., x_{r(ε)}} ⊂ C such that if x ∈ C then ‖x − x_i‖_E < ε for some i. The smallest closed group containing {x_1, ..., x_{r(ε)}} is G_ε = D x_1 + ··· + D x_{r(ε)}, where D is the ring of integers in K, that is, D = {k ∈ K : |k| ≤ 1}. Clearly, G_ε is compact. Moreover, from the ultrametric inequality it is clear that if x ∈ G, then there exists y ∈ G_ε such that ‖x − y‖_E < ε.
Thus G is totally bounded and hence compact.
Part (ii) will now follow if G has only finitely many distinct cosets in S; but this
must be the case, since otherwise we could find infinitely many disjoint cosets
G1, G2,... for which, by Theorem 4.1, P (Gi) = P (G) > 0.
(iii) From Theorem 6.1 of Evans (1989) we know that P(M) is either 0 or 1. If S ⊂ M then it follows from (i) that P(M) = 1. Conversely, suppose that P(M) = 1. If there exists x ∈ S such that x ∉ M then M and x + M are disjoint; but this is impossible, since P(x + M) = P(M) = 1 by Theorem 4.1. □
The converse to Theorem 4.2 holds.
Theorem 4.3. Suppose that (E, ‖·‖_E) is a separable Banach space and that G is a compact additive subgroup of E such that αG ⊂ G when α ∈ K with |α| ≤ 1. Let P be normalised Haar measure on G. Then P is a K-Gaussian measure for which S = G.
Proof. It follows from Corollary 7.4 of Evans (1989) that P is K-Gaussian. Part (i) of Theorem 4.2 shows that S = G. □
If (H, ⟨·, ·⟩) is a real, separable Hilbert space then it is well known that there exists a probability measure P on H such that ∫ e^{i⟨x, y⟩} P(dy) = e^{−⟨x, x⟩/2} for all x ∈ H if and only if H is finite dimensional. The corresponding result in our setting is the following.
Corollary 4.4. Let (E, ‖·‖_E) be a separable Banach space. There exists a probability measure P on E such that ∫ χ(T(y)) P(dy) = Φ(‖T‖_{E*}) for all T ∈ E* if and only if E is finite dimensional.
Proof. Suppose that E is infinite dimensional and such a probability measure P exists. It is clear that P is K-Gaussian and ‖T(X)‖_∞ = ‖T‖_{E*} for all T ∈ E*. Therefore we have
S = {x : |T(x)| ≤ ‖T‖_{E*}, all T ∈ E*}
and so S ⊇ {x : ‖x‖_E ≤ 1} (in fact, we have equality). Since the unit ball of E is certainly not compact, this contradicts part (ii) of Theorem 4.2.
Suppose, on the other hand, that E has finite dimension n. Let e_1, ..., e_n be an orthogonal basis for E (see Theorem 50.8 of Schikhof (1984) for the existence of such a basis). By rescaling, we may assume that q^{−1} < ‖e_i‖_E ≤ 1, 1 ≤ i ≤ n, so that {x ∈ E : ‖x‖_E ≤ 1} = {Σ_{i=1}^n α_i e_i : ∨_{i=1}^n |α_i| ≤ 1} and hence ‖T‖_{E*} = ∨_{i=1}^n |T(e_i)| for each T ∈ E*. Let X_1, ..., X_n be independent, K-valued, K-Gaussian random variables for which ‖X_i‖_∞ = 1, so that X_1, ..., X_n are orthonormal in L^∞ (see Theorem 7.5 of Evans (1989)). Set X = Σ_{i=1}^n X_i e_i. Then X is K-Gaussian and
‖T(X)‖_∞ = ∨_{i=1}^n |T(e_i)|, so the law of X is the probability measure we are seeking. □
5. Contraction
Suppose that {Z(i) : i ∈ I} is a real-valued, centred, Gaussian process on some index set I. Using, for example, Theorem II.4.3 in Kuo (1975) it is not difficult to see that if α ∈ R \ {−1, 0, 1} then the law of αZ is either equivalent or perpendicular to the law of Z, depending on whether the subspace of L^2 spanned by {Z(i) : i ∈ I} is finite dimensional or infinite dimensional. The corresponding result holds in our setting.
Theorem 5.1. Suppose that (E, F, B) satisfies the hypothesis (*) and P is a K-Gaussian measure on E. Given α ∈ K with 0 < |α| < 1, let Q be the law of αX. Then either Q ≪ P or Q ⊥ P, depending on whether the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is finite dimensional or infinite dimensional.
Proof. Suppose that the dimension of the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is m < ∞. Then there exist T_1*, ..., T_m* ∈ span F such that span{T_1*(X), ..., T_m*(X)} = span{T(X) : T ∈ F} and T_1*(X), ..., T_m*(X) are orthonormal in L^∞(P) (see Theorem 50.8 of Schikhof (1984)). From Theorem 7.5 of Evans (1989), T_1*(X), ..., T_m*(X) are independent. If g is a B-measurable function then g is of the form g(x) = G(T_1(x), T_2(x), ...) for some measurable function G on K^∞ and some sequence {T_i} ⊂ F. We may find coefficients β_{ij} ∈ K, 1 ≤ i < ∞, 1 ≤ j ≤ m, such that T_i(X) = Σ_{j=1}^m β_{ij} T_j*(X), 1 ≤ i < ∞. Define G* : K^m → R by G*(t_1, ..., t_m) = G((Σ_{j=1}^m β_{ij} t_j)_{i≥1}). A straightforward calculation of the density of the law of (T_1*(X), ..., T_m*(X)) with respect to μ^m shows that
Q[g(X)] = P[g(αX)]
= P[G((αT_i(X))_{i≥1})]
= P[G*(αT_1*(X), ..., αT_m*(X))]
= |α|^{−m} P[g(X) Φ(|α|^{−1} |T_1*(X)|) ⋯ Φ(|α|^{−1} |T_m*(X)|)],
and so Q ≪ P.
Suppose now that the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is infinite dimensional. Then we may find a sequence (T_i) ⊂ span F such that T_1(X), T_2(X), ... are orthonormal in L^∞(P) and hence independent (see Theorem 7.5 of Evans (1989)). The event {|T_i(X)| ≤ |α|, 1 ≤ i < ∞} has probability zero under P and probability one under Q, so that P ⊥ Q. □
When E is a Banach space the dimension appearing in Theorem 5.1 shows up in
the probability of small balls.
Theorem 5.2. Suppose that (E, ‖·‖_E) is a separable Banach space and P is a K-Gaussian measure on E. Let
m = dim {T(X) : T ∈ E*}.
Then
m = − lim_{n→∞} log_q P(‖X‖_E ≤ q^{−n}) / n.
Proof. The result is obvious if m = 0, since in that case X = 0 almost surely. Suppose next that 1 ≤ m < ∞. We may find T_1, ..., T_m ∈ E* such that {T_1(X), ..., T_m(X)} forms an orthonormal basis for {T(X) : T ∈ E*} in L^∞(P). Consequently, T_1(X), ..., T_m(X) are independent random variables. Let (e_i)_{i≥1} be a basis for E. If π_i : E → K, 1 ≤ i < ∞, is the i-th coordinate map then π_i ∈ E* and so we have
X = Σ_{i≥1} π_i(X) e_i = Σ_{i≥1} (Σ_{j=1}^m α_{ij} T_j(X)) e_i = Σ_{j=1}^m T_j(X) (Σ_{i≥1} α_{ij} e_i) = Σ_{j=1}^m T_j(X) f_j,
say, for some choice of coefficients (α_{ij}). It is easy to see that {f_1, ..., f_m} must be linearly independent, otherwise we would have a contradiction to the linear independence of T_1(X), ..., T_m(X). Thus (x_1, ..., x_m) ↦ ‖Σ_{j=1}^m x_j f_j‖_E is a norm on K^m. Since all norms on K^m are equivalent (see Theorem 13.3 of Schikhof (1984)) there exists a constant c > 1 such that
c^{−1} ∨_{j=1}^m |T_j(X)| ≤ ‖X‖_E ≤ c ∨_{j=1}^m |T_j(X)|
and the result now follows easily.
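(One way to complete this last step: each T_j(X) is uniformly distributed on {t : |t| ≤ 1}, so P(|T_j(X)| ≤ q^{−k}) = q^{−k} and, by independence, (r/q)^m ≤ P(∨_{j=1}^m |T_j(X)| ≤ r) ≤ r^m for 0 < r ≤ 1. Combined with the two-sided bound above, this yields constants c_1, c_2 > 0, depending only on c, q and m, such that c_1 q^{−nm} ≤ P(‖X‖_E ≤ q^{−n}) ≤ c_2 q^{−nm} for all sufficiently large n, whence −log_q P(‖X‖_E ≤ q^{−n}) / n → m.)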
Suppose finally that m = ∞. We may then find a sequence (T_i)_{i≥1} ⊂ E* such that (T_i(X))_{i≥1} are orthonormal in L^∞(P) and hence independent. Then, for each ℓ ≥ 1,
P(‖X‖_E ≤ q^{−n}) ≤ P(∩_{i=1}^ℓ {|T_i(X)| ≤ ‖T_i‖_{E*} q^{−n}}) = Π_{i=1}^ℓ P(|T_i(X)| ≤ ‖T_i‖_{E*} q^{−n}),
and so −lim_n log_q P(‖X‖_E ≤ q^{−n}) / n = ∞, as required. □
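To see the small-ball rate of Theorem 5.2 concretely, here is a minimal Monte Carlo sketch, assuming K = Q_p with p = 3 (so q = 3), E = K^2 with the norm |(x_1, x_2)| = |x_1| ∨ |x_2|, and X having independent coordinates each distributed as normalised Haar measure on {|x| ≤ 1}; in this case m = 2 and P(‖X‖_E ≤ q^{−n}) = q^{−2n} exactly, so the last printed column should be close to 2.

    import math
    import random

    p, m, digits, samples = 3, 2, 12, 200_000   # q = p; m = dim{T(X): T in E*}; truncation; trials

    def valuation() -> int:
        """Valuation of a Haar-uniform element of the unit ball, sampled digit by digit."""
        for v in range(digits):
            if random.randrange(p) != 0:          # first non-zero p-adic digit sits at position v
                return v
        return digits                             # all sampled digits were zero

    for n in (1, 2, 3):
        hits = sum(1 for _ in range(samples)
                   if all(valuation() >= n for _ in range(m)))   # the event {||X||_E <= q**-n}
        prob = hits / samples
        print(n, prob, -math.log(prob, p) / n)    # last column is an estimate of m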
REFERENCES
[1] Evans, S.N. (1989). Local field Gaussian measures. In Seminar on Stochastic Processes 1988 (E. Cinlar, K.L. Chung, R.K. Getoor, eds.), pp. 121-160. Birkhauser.
[2] Feldman, J. (1958). Equivalence and perpendicularity of Gaussian processes. Pacific J. Math. 8, 699-708.
[3] Hajek, J. (1959). On a simple linear model in Gaussian processes. In Trans. Second Prague Conf. Information Theory, pp. 185-197.
[4] Kuo, H.-H. (1975). Gaussian Measures in Banach Spaces. Lecture Notes in Mathematics 463. Springer.
[5] Schikhof, W.H. (1984). Ultrametric Calculus. Cambridge University Press.
[6] Taibleson, M.H. (1975). Fourier Analysis on Local Fields. Princeton University Press.
Department of Statistics
University of California
367 Evans Hall
Berkeley, CA 94720
U.S.A.