3 Sample paths of the Brownian motion

In this lecture we discuss some properties which are shared, with probability 1, by the sample paths of Brownian motion. We have already seen positive results: with probability 1, paths are continuous, and Hölder continuous on bounded intervals with arbitrary exponent $\gamma < 1/2$. We can ask about differentiability. The answer is surprising: a theorem of Dvoretzky, Erdős and Kakutani says that sample paths are (almost surely) nowhere differentiable on $\mathbb{R}_+$. Thus, while continuous, nowhere differentiable functions are considered pathological in mathematical analysis, they are the norm among trajectories of Brownian motion.

We will not prove this result, but the following simple reasoning justifies it at the point $t = 0$. Remember that, by Proposition 2.18, the process $Y_t = t B_{1/t}$ is a Brownian motion, with $Y_0 = 0$. Since $Y_t/t = B_{1/t}$, the derivative of $Y_t$ at $t = 0$ exists if and only if $B_t$ has a limit as $t \to \infty$. But, in fact, we have
\[
\sup_{n \in \mathbb{N}} B_n = +\infty, \qquad \inf_{n \in \mathbb{N}} B_n = -\infty \tag{3.1}
\]
with probability 1; hence, almost surely, the paths of the Brownian motion $Y_t$ are not differentiable at 0. The non-differentiability of Brownian paths can be phrased by saying that Brownian particles have at no point a finite velocity; in this respect they can only be regarded as an approximate model of physical reality.

Maxima for Brownian paths

The material contained in this lecture can be found in many classical textbooks, such as Doob [Do53], Freedman [Fr71] or Lévy [Lé54, Lé65]. We consider a real Brownian motion $\{B_t,\ t \ge 0\}$ on a stochastic basis $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, \mathbb{P})$. We will often write $B(t)$ instead of $B_t$.

Proposition 3.1. For almost every $\omega$, the function $B(\cdot, \omega)$ is monotone in no interval.

Proof. We can simplify the situation.
First, it suffices to prove the result for intervals with rational endpoints; then, thanks to Exercise 2.7, we can reduce to proving that, a.s., no path is monotone nondecreasing on $[0,1]$. Define, for every $n$, the events
\[
A_n = \Big\{ \omega : B\big(\tfrac{i+1}{n}\big) - B\big(\tfrac{i}{n}\big) \ge 0 \ \text{for } i = 0, \dots, n-1 \Big\} = \bigcap_{i=0}^{n-1} \Big\{ B\big(\tfrac{i+1}{n}\big) - B\big(\tfrac{i}{n}\big) \ge 0 \Big\}.
\]
Note that the set of nondecreasing paths on $[0,1]$ is contained in $\bigcap_n A_n$. But
\[
\mathbb{P}(A_n) = \prod_{i=0}^{n-1} \mathbb{P}\Big( B\big(\tfrac{i+1}{n}\big) - B\big(\tfrac{i}{n}\big) \ge 0 \Big) = \frac{1}{2^n} \to 0 \tag{3.2}
\]
and the proof is complete. □

Problem 3.1. Prove claim (3.2).

Let us recall that a continuous function $f$ has a local maximum at $t$ if there exists $\varepsilon > 0$ such that $f(s) \le f(t)$ for all $s \in (t-\varepsilon, t+\varepsilon)$. The maximum is strict if the inequality is strict, i.e., $f(s) < f(t)$ for all $s \in (t-\varepsilon, t+\varepsilon)$, $s \ne t$. We shall need the following property of continuous functions.

Lemma 3.2. Let $f$ be a continuous function on $[0,1]$, monotone in no interval. Then the set of local maxima of $f$ is dense in $[0,1]$.

Proof. Applying the argument below to an arbitrary subinterval, it is sufficient to prove the existence of just one local maximum in $[0,1]$. Assume first that $f(0) < f(1)$, and denote by $z$ the last point in $[0,1]$ such that $f(z) = f(0)$. Then $f(x) > f(z) = f(0)$ for all $x \in (z,1]$, but the function is not monotone increasing on $[z,1]$; hence there must be $z < a < b < 1$ with $f(a) > f(b) > f(z)$. In the interval $[z,b]$ the function initially increases (since $f(a) > f(z)$) and then decreases (since $f(b) < f(a)$); therefore there must be a local maximum. The case $f(0) > f(1)$ is treated similarly, taking $z$ the first point with $f(z) = f(1)$ and proving the existence of a local maximum in $[0,z]$. Finally, in case $f(0) = f(1)$, since $f$ is not constant there exists $c$ such that $f(c) \ne f(0)$; one can then proceed with the same construction on either $[0,c]$ or $[c,1]$. □

We are ready to state the following result.

Theorem 3.3.
For almost every $\omega$, all local maxima of $\{B_t(\omega),\ t \in [0,1]\}$ are strict and constitute a dense set in $[0,1]$.

Proof. Let us denote by $M[a,b]$ the maximum value taken by the Brownian motion $B_t$ as $t$ varies in $[a,b]$. We claim that for any two intervals $[a,b]$ and $[c,d]$ with $a < b < c < d$,
\[
\mathbb{P}(M[a,b] \ne M[c,d]) = 1. \tag{3.3}
\]
This clearly implies the thesis. Let us prove (3.3). Setting
\[
X = B(c) - B(b), \qquad Y = \max\{ B(c+t) - B(c),\ t \in [0, d-c] \},
\]
so that $M[c,d] = B(c) + Y$, we have
\[
\{ M[a,b] \ne M[c,d] \} = \{ B(c) + Y \ne M[a,b] \} = \{ X \ne M[a,b] - B(b) - Y \}.
\]
Notice that $\mathcal{F}_b$, $X$ and $Y$ are independent, while $M[a,b]$ and $B(b)$ are $\mathcal{F}_b$-measurable; therefore, denoting by $\mu_X$ the law of $X$ and by $\mu_Z$ the law of $Z = M[a,b] - B(b) - Y$,
\[
\mathbb{P}(X = Z) = \int_{\{x = z\}} \mu_X \otimes \mu_Z(\mathrm{d}x\, \mathrm{d}z) = 0,
\]
since $X$ has a Gaussian distribution, so that $\mathbb{P}(X = x) = 0$ for every $x$; hence (3.3) holds. □

The zero set of Brownian motion

We have seen the link between the behaviour of Brownian motion for $t$ near zero and its asymptotic behaviour as $t \to \infty$. Before we proceed with the study of the set of zeroes of Brownian motion, let us exploit this link some more. First, we propose the following exercise.

Problem 3.2.
1. Prove claim (3.1): $\sup_{n \in \mathbb{N}} B_n = +\infty$, $\inf_{n \in \mathbb{N}} B_n = -\infty$.
2. Prove that $\lim_{t \to +\infty} \frac{B_t}{t} = 0$ a.s.

Hint: write $\{\sup_n |B_n| < \infty\} = \bigcup_{k=1}^{\infty} \{\sup_n |B_n| < k\}$, so that the probability of the event on the left is bounded by the series of the probabilities on the right. But
\[
\mathbb{P}(\sup_n |B_n| < k) \le \lim_n \mathbb{P}(|B_n| < k) = \lim_n \mathbb{P}(|B_1| < k/n^{1/2}) \dots
\]

Let us consider the following consequence of (3.1). Writing $W_t = t B_{1/t}$, as $t \to 0^+$, for almost every path there exist two sequences $\{t_n\}$ and $\{s_n\}$ converging to 0, with $t_{n+1} < s_{n+1} < t_n < s_n$, such that $W(t_n) > 0$ and $W(s_n) < 0$. Since Brownian paths are almost surely continuous, there exists a sequence $\{\tau_n\}$ with $t_n < \tau_n < s_n$ and $W(\tau_n) = 0$. We have thus proved:

Corollary 3.4. With probability one, Brownian motion returns to the origin infinitely often.
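The path properties above can be observed on a simulated trajectory. The following Python sketch (grid size and number of subintervals are arbitrary illustration choices, not part of the text) discretises a Brownian path on $[0,1]$ and checks that it is monotone on none of 16 subintervals, in the spirit of Proposition 3.1 and estimate (3.2).

```python
import numpy as np

def brownian_path(n, rng):
    """Approximate a Brownian path on [0, 1] by n i.i.d. Gaussian increments."""
    dt = 1.0 / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)
    return np.concatenate([[0.0], np.cumsum(increments)])  # B[k] ~ B(k/n)

rng = np.random.default_rng(0)
B = brownian_path(2**14, rng)

# Proposition 3.1: on each of 16 subintervals the path has both positive and
# negative increments, hence it is monotone on none of them.  By (3.2), the
# chance that a block of 1024 increments shares one sign is 2 * 2**(-1024).
dB = np.diff(B)
for block in np.split(dB, 16):
    assert (block > 0).any() and (block < 0).any()
```

With $2^{10}$ increments per subinterval, a monotone block would be an event of probability $2 \cdot 2^{-1024}$, which is why the check passes for any seed in practice.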
Let us introduce the random set $Z(\omega) = \{ t \ge 0 : B_t(\omega) = 0 \}$. Notice that $0 \in Z(\omega)$ for every $\omega$, and the previous corollary shows that, with probability one, 0 is an accumulation point of $Z(\omega)$.

Theorem 3.5. With probability one, the set $Z(\omega)$ is closed, of Lebesgue measure 0 and nowhere dense in $\mathbb{R}_+$.

Proof. For simplicity, we suppress the phrase "with probability one" in what follows. Since $Z(\omega) = B(\cdot, \omega)^{-1}(\{0\})$ is the preimage of a closed set under a continuous mapping, it is closed. Let $\lambda$ denote the Lebesgue measure on $\mathbb{R}_+$; using Fubini's theorem,
\[
\mathbb{E}[\lambda(Z(\omega))] = (\lambda \times \mathbb{P})\{(t, \omega) : B_t(\omega) = 0\} = \int_0^\infty \mathbb{P}(\omega : B_t(\omega) = 0)\, \mathrm{d}t = 0,
\]
since $B_t$ takes every given value with probability zero. Let $I$ be an interval of the positive real line; since $t \mapsto B_t(\omega)$ is continuous on $I$, if the set $Z(\omega) \cap I$ were dense in $I$ then necessarily $B_t \equiv 0$ on $I$, which is absurd (the path would then be constant, hence differentiable, on $I$). Hence $Z(\omega)$ is not dense in any interval $I$. □

3.1 Regularity of sample paths

Of particular interest is the study of the variations of the sample paths of Brownian motion; this is connected with the possibility of defining a (Lebesgue–Stieltjes) integral with respect to each path.

3.1.1 Quadratic variation

Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a given function; for any interval $[s,t] \subset \mathbb{R}_+$ and partition $\pi = \{ s = t_0 < t_1 < \dots < t_n = t \}$ we set
\[
qV(f, \pi, [s,t]) = \sum_{i=1}^{n} \big( f(t_i) - f(t_{i-1}) \big)^2.
\]
We denote by $\|\pi\| = \max\{ t_i - t_{i-1} : t_{i-1}, t_i \in \pi \}$ the mesh of the partition; we then call $f$ a function of bounded quadratic variation on $[s,t]$ if the following limit exists:
\[
qV(f, [s,t]) = \lim_{\|\pi\| \to 0} qV(f, \pi, [s,t]) < \infty.
\]
We begin by presenting a link between quadratic variation and Hölder continuity.

Proposition 3.6. Fix an interval $[s,t] \subset \mathbb{R}_+$; then every function $f : [s,t] \to \mathbb{R}$ which is Hölder continuous with exponent $\alpha > 1/2$ satisfies $qV(f, [s,t]) = 0$.

Proof.
Take a partition $\pi$ of $[s,t]$; then
\[
\sum_{i=1}^{n} \big( f(t_i) - f(t_{i-1}) \big)^2 \le \sum_{i=1}^{n} C^2 (t_i - t_{i-1})^{2\alpha} \le C^2 \Big( \max_{t_{i-1}, t_i \in \pi} |t_i - t_{i-1}| \Big)^{2\alpha-1} \underbrace{\sum_{i=1}^{n} |t_i - t_{i-1}|}_{= t-s},
\]
hence
\[
qV(f, \pi, [s,t]) \le C^2 (t-s) \|\pi\|^{2\alpha-1}
\]
and, passing to the limit as $\|\pi\| \to 0$, the thesis follows. □

The next step is the study of the quadratic variation of the trajectories $t \mapsto B_t(\omega)$ of a Brownian motion on an interval $[s,t]$. If we set $A_{[s,t]}(\omega) = qV(B_\cdot(\omega), [s,t])$, this turns out to be an $\mathcal{F}_t$-measurable random variable. Surprisingly enough, this random variable is constant with probability 1, and $A_{[s,t]} = t-s$. In order to simplify the proof, we give a weaker result, considering convergence in $L^2$ (from which convergence in probability follows, as well as almost sure convergence along a subsequence).

Proposition 3.7. Let $B = \{B_\tau, \tau \ge 0\}$ be a Brownian motion, $[s,t] \subset \mathbb{R}_+$ a given interval and $\pi$ a partition of $[s,t]$. Then
\[
\lim_{\|\pi\| \to 0} qV(B_\cdot(\omega), \pi, [s,t]) = t-s \quad \text{in } L^2.
\]

Proof. Since $t-s = \sum_k (t_k - t_{k-1})$ we have
\[
qV(B_\cdot, \pi, [s,t]) - (t-s) = \sum_k \big[ (B_{t_k} - B_{t_{k-1}})^2 - (t_k - t_{k-1}) \big]
\]
and
\[
\mathbb{E}\big[ qV(B_\cdot, \pi, [s,t]) - (t-s) \big]^2 = \sum_{j,k} \mathbb{E}\Big\{ \big[ (B_{t_k} - B_{t_{k-1}})^2 - (t_k - t_{k-1}) \big] \big[ (B_{t_j} - B_{t_{j-1}})^2 - (t_j - t_{j-1}) \big] \Big\} = \sum_k \mathbb{E}\big[ (B_{t_k} - B_{t_{k-1}})^2 - (t_k - t_{k-1}) \big]^2,
\]
since for $j \ne k$ the random variables $(B_{t_k} - B_{t_{k-1}})^2 - (t_k - t_{k-1})$ and $(B_{t_j} - B_{t_{j-1}})^2 - (t_j - t_{j-1})$ are independent with mean 0, so the expected value of their product is 0. Hence
\[
\mathbb{E}\big[ qV(B_\cdot, \pi, [s,t]) - (t-s) \big]^2 = \sum_k (t_k - t_{k-1})^2\, \mathbb{E}\Bigg[ \bigg( \frac{B_{t_k} - B_{t_{k-1}}}{\sqrt{t_k - t_{k-1}}} \bigg)^2 - 1 \Bigg]^2 = c \sum_k (t_k - t_{k-1})^2,
\]
where, for each $k$, $\frac{B_{t_k} - B_{t_{k-1}}}{\sqrt{t_k - t_{k-1}}}$ has a standard Gaussian distribution, so that
\[
c = \mathbb{E}\Bigg[ \bigg( \frac{B_{t_k} - B_{t_{k-1}}}{\sqrt{t_k - t_{k-1}}} \bigg)^2 - 1 \Bigg]^2 = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} (x^2 - 1)^2\, e^{-x^2/2}\, \mathrm{d}x = 2.
\]
Then we have
\[
\mathbb{E}\big[ qV(B_\cdot, \pi, [s,t]) - (t-s) \big]^2 \le 2 \|\pi\| (t-s) \to 0
\]
as $\|\pi\| \to 0$. □

3.1.2 Bounded variation

Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a given function; for any interval $[s,t]$ and partition $\pi = \{ s = t_0 < t_1 < \dots < t_n = t \}$ we set
\[
V(f, \pi, [s,t]) = \sum_{i=1}^{n} \big| f(t_i) - f(t_{i-1}) \big|.
\]
We call $f$ a function of bounded variation on $[s,t]$ if the following limit exists:
\[
V(f, [s,t]) = \lim_{\|\pi\| \to 0} V(f, \pi, [s,t]) < \infty.
\]
We call $f$ a function of bounded total variation if $V(f, [0,\infty)) < \infty$. We also set
\[
V^+(f, \pi, [s,t]) = \sum_{i=1}^{n} \big[ f(t_i) - f(t_{i-1}) \big]^+, \qquad V^-(f, \pi, [s,t]) = \sum_{i=1}^{n} \big[ f(t_i) - f(t_{i-1}) \big]^-,
\]
and similarly
\[
V^+(f, [s,t]) = \lim_{\|\pi\| \to 0} V^+(f, \pi, [s,t]), \qquad V^-(f, [s,t]) = \lim_{\|\pi\| \to 0} V^-(f, \pi, [s,t]).
\]
Take a function $f$ of bounded variation with $f(0) = 0$, and set $V^+(x) = V^+(f, [0,x])$, $V^-(x) = V^-(f, [0,x])$ and $V(x) = V(f, [0,x])$. Then we can write $f$ as a difference of two increasing functions,
\[
f(x) = V^+(x) - V^-(x),
\]
while the variation of $f$ is given by $V(x) = V^+(x) + V^-(x)$.

Proposition 3.8. Trajectories of Brownian motion are, with probability 1, of unbounded variation on every interval.

Proof. Fix $[a,b] \subset \mathbb{R}_+$ and a partition $\pi$ of $[a,b]$; the quadratic variation of $B_\cdot(\omega)$ satisfies
\[
qV(B_\cdot, \pi, [a,b]) = \sum_k (B_{t_k} - B_{t_{k-1}})^2 \le \max_k |B_{t_k} - B_{t_{k-1}}| \sum_k |B_{t_k} - B_{t_{k-1}}|. \tag{3.4}
\]
Since trajectories are continuous, we have
\[
\lim_{\|\pi\| \to 0} \max_k |B_{t_k} - B_{t_{k-1}}| = 0 \quad \text{a.s.}
\]
Assume that there exists an event $\Omega'_{[a,b]} \in \mathcal{F}$, with positive probability, such that the trajectories $B_\cdot(\omega)$ have bounded variation for $\omega \in \Omega'_{[a,b]}$. Then for such $\omega$ the quadratic variation of the trajectory would be 0 by (3.4), contradicting Proposition 3.7. Hence there exists $\Omega_{[a,b]} \in \mathcal{F}$ with $\mathbb{P}(\Omega_{[a,b]}) = 1$ such that for $\omega \in \Omega_{[a,b]}$ the path $t \mapsto B_t(\omega)$ has unbounded variation on $[a,b]$.

Consider now the countable family of intervals $[a,b] \subset \mathbb{R}_+$ with rational endpoints, and define the set
\[
\bar{\Omega} = \bigcap_{0 \le a < b,\ a, b \in \mathbb{Q}} \Omega_{[a,b]}.
\]
Then $\bar{\Omega} \in \mathcal{F}$ and $\mathbb{P}(\bar{\Omega}) = 1$.
Therefore, for each $\omega \in \bar{\Omega}$, $B_t(\omega)$ has unbounded variation on every nontrivial interval $[s,t]$ with real endpoints (each such interval contains an interval $[a,b]$ with rational endpoints, and the variation increases with the interval). This concludes the proof. □

The above results, together with Proposition 3.6, imply the following property.

Corollary 3.9. There exists a set of probability 1 such that the corresponding trajectories $t \mapsto B_t(\omega)$ are nowhere $\alpha$-Hölder continuous for any exponent $\alpha > 1/2$.

3.2 The law of the iterated logarithm

We present in this section fine results about the regularity of Brownian sample paths, following the beautiful analysis of P. Lévy. We present the law of the iterated logarithm, and we show that the Hölder exponent $\gamma < 1/2$ cannot be improved. The first result (the law of the iterated logarithm) describes the oscillations of the Brownian sample paths near zero and at infinity.

Theorem 3.10. For $\omega$ in a set of probability 1,
\[
\limsup_{t \to 0^+} \frac{B_t(\omega)}{\sqrt{2t \log\log(1/t)}} = 1. \tag{3.5}
\]

Before we proceed with the proof, let us note the following consequences of the theorem. By symmetry, since $-B_t$ is again a Brownian motion,
\[
\liminf_{t \to 0^+} \frac{B_t(\omega)}{\sqrt{2t \log\log(1/t)}} = -1,
\]
and time inversion provides
\[
\limsup_{t \to +\infty} \frac{B_t(\omega)}{\sqrt{2t \log\log t}} = 1, \qquad \liminf_{t \to +\infty} \frac{B_t(\omega)}{\sqrt{2t \log\log t}} = -1.
\]
For almost every trajectory we can construct two sequences $\{t_n\}$ and $\{s_n\}$, increasing and diverging to $+\infty$, with $s_n < t_n < s_{n+1}$, such that
\[
B(t_n) \ge \sqrt{t_n \log\log t_n}, \qquad B(s_n) \le -\sqrt{s_n \log\log s_n}.
\]
Thus the oscillations of every path grow larger and larger; further, by continuity, the trajectories touch each real value infinitely often.

In preparation for the proof, we propose the following exercise.

Problem 3.3. Compute the tail of the normal distribution: for every $x > 0$ we have
\[
\frac{x}{1+x^2}\, e^{-x^2/2} \le \int_x^\infty e^{-y^2/2}\, \mathrm{d}y \le \frac{1}{x}\, e^{-x^2/2}. \tag{3.6}
\]

The next lemma will also be useful.

Lemma 3.11. Consider on $(\Omega, \mathcal{F}, \mathbb{P})$ a sequence $\{X_n, n \in \mathbb{N}\}$ of independent real Gaussian random variables with zero mean, and define the partial sums $S_n = X_1 + \dots + X_n$. Then, for any $x > 0$,
\[
\mathbb{P}(\max_{i \le n} S_i \ge x) \le 2\, \mathbb{P}(S_n \ge x).
\]

Proof. Note that
\[
\Big\{ \max_{i \le n} S_i \ge x \Big\} = \bigcup_{i=1}^{n} A_i, \qquad \text{where } A_i = \{ \omega \in \Omega : S_1 < x, \dots, S_{i-1} < x, S_i \ge x \}.
\]
Define also the set $B = \{ \omega : S_n \ge x \}$. Since $A_i \cap A_j = \emptyset$ for $i \ne j$, using independence we find
\[
\mathbb{P}(A_i \cap B) \ge \mathbb{P}(A_i \cap \{ \omega : S_n \ge S_i \}) = \mathbb{P}(A_i)\, \mathbb{P}(\{ \omega : S_n \ge S_i \}) = \mathbb{P}(A_i)\, \mathbb{P}(X_{i+1} + \dots + X_n \ge 0).
\]
It is easy to check that $\mathbb{P}(X_{i+1} + \dots + X_n \ge 0) = \frac{1}{2}$; therefore
\[
\mathbb{P}(B) \ge \sum_{i=1}^{n} \mathbb{P}(A_i \cap B) \ge \frac{1}{2} \sum_{i=1}^{n} \mathbb{P}(A_i) = \frac{1}{2}\, \mathbb{P}\Big( \bigcup_{i=1}^{n} A_i \Big)
\]
and this gives the assertion. □

Corollary 3.12 (Maximal inequality for Brownian motion). For a Brownian motion $\{B_t, t \ge 0\}$ we have
\[
\mathbb{P}\big( \sup_{r \le t} B_r > x \big) \le 2\, \mathbb{P}(B_t > x), \qquad t \ge 0. \tag{3.7}
\]

Proof. Brownian motion has continuous paths, hence the supremum on the left in (3.7) can be computed for $r$ varying in the set of dyadic rational numbers in $[0,t]$. Using that the increments of Brownian motion are independent and Gaussian, the assertion follows from the previous lemma. □

Proof (Theorem 3.10). We shall prove first that
\[
\limsup_{t \to 0^+} \frac{B_t(\omega)}{\sqrt{2t \log\log(1/t)}} \le 1. \tag{3.8}
\]
With the notation $h(t) = \sqrt{2t \log\log(1/t)}$ and fixed $\delta \in (0,1)$, we choose $\theta \in (0,1)$ such that $\lambda = \theta (1+\delta)^2 > 1$. Define a sequence of times $t_n = \theta^n$ decreasing to 0 and consider the events
\[
A_n = \Big\{ \max_{t \in [t_{n+1}, t_n]} \big( B_t - (1+\delta) h(t) \big) > 0 \Big\}.
\]
If we prove that $\sum_{n \ge 0} \mathbb{P}(A_n)$ converges, we have from the Borel–Cantelli lemma that, for $\omega$ outside a set of measure 0, there exists $n_0 = n_0(\omega)$ such that for any $n \ge n_0$
\[
B_t(\omega) < (1+\delta) h(t), \qquad t \in [t_{n+1}, t_n].
\]
This gives
\[
\limsup_{t \to 0^+} \frac{B_t(\omega)}{h(t)} \le 1 + \delta
\]
and, from the arbitrariness of $\delta$, (3.8) follows. To estimate the probability of $A_n$ we notice the inclusion
\[
A_n \subset \Big\{ \max_{t \in [0, t_n]} B_t > (1+\delta) h(t_{n+1}) \Big\}.
\]
Using (3.7) we have the following estimate on the probability of this event:
\[
\mathbb{P}\Big( \max_{t \in [0, t_n]} B_t > (1+\delta) h(t_{n+1}) \Big) \le 2\, \mathbb{P}\big( B_{t_n} > (1+\delta) h(t_{n+1}) \big) = 2\, \mathbb{P}\bigg( \frac{B_{t_n}}{\sqrt{t_n}} > \frac{(1+\delta) h(t_{n+1})}{\sqrt{t_n}} \bigg).
\]
The random variable $B_{t_n}/\sqrt{t_n}$ has a standard Gaussian distribution, so we can use (3.6): setting for simplicity $x_n = (1+\delta) h(t_{n+1})/\sqrt{t_n}$, we get
\[
\mathbb{P}\Big( \max_{t \in [0, t_n]} B_t > (1+\delta) h(t_{n+1}) \Big) \le \sqrt{2/\pi}\, \frac{1}{x_n}\, e^{-x_n^2/2}.
\]
Computing $x_n^2$: with the above notation we have $x_n^2 = 2\lambda \log[(n+1) \log(1/\theta)]$, hence
\[
\mathbb{P}(A_n) \le C\, \frac{1}{(n+1)^\lambda},
\]
the general term of a convergent series since $\lambda > 1$.

We prove next the converse inequality. The proof is based on an application of the second part of the Borel–Cantelli lemma. With the same notation as in the first part, choose $\varepsilon \in (0,1)$ and $\theta \in (0,1)$; we define the events
\[
A'_n = \{ B_{t_n} - B_{t_{n+1}} \ge (1-\varepsilon) h(t_n) \}.
\]
Let us prove that $\sum_n \mathbb{P}(A'_n)$ diverges. The idea is to use the left inequality of (3.6), which yields, for any $x \ge 1$, the estimate
\[
\mathbb{P}\bigg( \frac{B_{t_n} - B_{t_{n+1}}}{\sqrt{t_n - t_{n+1}}} > x \bigg) \ge \frac{1}{2} \frac{1}{\sqrt{2\pi}} \frac{1}{x}\, e^{-x^2/2};
\]
so, taking $x = x_n = (1-\varepsilon)\, \frac{h(t_n)}{\sqrt{t_n - t_{n+1}}}$, we have
\[
\mathbb{P}\bigg( \frac{B_{t_n} - B_{t_{n+1}}}{\sqrt{t_n - t_{n+1}}} > x_n \bigg) \ge C\, \frac{1}{n^{(1-\varepsilon)^2/(1-\theta)} \sqrt{\log n}}.
\]
If we take $\theta < 1 - (1-\varepsilon)^2$, then $(1-\varepsilon)^2/(1-\theta) < 1$ and this is the general term of a divergent series. Since the events $A'_n$ involve increments over disjoint intervals, they are independent, and the second Borel–Cantelli lemma gives
\[
\mathbb{P}\big( B_{t_n} - B_{t_{n+1}} \ge (1-\varepsilon) h(t_n) \ \text{infinitely often} \big) = 1.
\]
By (3.8) applied to the Brownian motion $\{-B_t, t \ge 0\}$, we know that $B_{t_{n+1}} \ge -(1+\varepsilon) h(t_{n+1})$ eventually as $n \to \infty$. Hence we get, for infinitely many indices $n$,
\[
B_{t_n} \ge (1-\varepsilon) h(t_n) - (1+\varepsilon) h(t_{n+1}) = h(t_n) \Big( 1 - \varepsilon - (1+\varepsilon)\, \frac{h(t_{n+1})}{h(t_n)} \Big).
\]
Using the limit $\lim_{n \to \infty} \frac{h(\theta^{n+1})}{h(\theta^n)} = \sqrt{\theta}$ and choosing $\theta$ so small that $\varepsilon + (1+\varepsilon)\sqrt{\theta} < \delta$, we have
\[
B_{t_n} \ge (1-\delta) h(t_n) \quad \text{infinitely often.} \qquad \Box
\]

3.2.1 Modulus of continuity

We have seen that Brownian sample paths are $\alpha$-Hölder continuous for every $\alpha < 1/2$ and, by Corollary 3.9, they are nowhere $\alpha$-Hölder continuous for every $\alpha > 1/2$. This section will treat the limit case $\alpha = 1/2$.
The modulus of continuity of a real function $f : I \to \mathbb{R}$, where $I \subset \mathbb{R}$ is a given interval, is defined as the function
\[
w(\delta) = \sup_{x, y \in I,\ |x-y| \le \delta} |f(x) - f(y)|.
\]
Of course, the modulus of continuity of the Brownian sample paths is bounded above by $C \delta^\alpha$ for any $\alpha < 1/2$, since the trajectories are $\alpha$-Hölder continuous for such $\alpha$. On the other hand, by the law of the iterated logarithm it must be at least of order $\sqrt{\delta \log\log(1/\delta)}$ for arbitrarily small $\delta$. The exact modulus of continuity of the Brownian sample paths is given by the following theorem. We omit its proof but refer the interested reader, for instance, to [KS88, Theorem 2.9.5].

Theorem 3.13 (Lévy). For every $T > 0$, on the interval $I = [0,T]$ we have
\[
\mathbb{P}\bigg( \lim_{\delta \to 0^+} \sup_{s < t \in I,\ |t-s| \le \delta} \frac{|B_t - B_s|}{(2\delta \log(1/\delta))^{1/2}} = 1 \bigg) = 1. \tag{3.9}
\]

This result shows that, letting $w(\cdot, \omega)$ be the modulus of continuity of $B_t(\omega)$ for $t \in [0,T]$, almost surely
\[
\lim_{\delta \to 0^+} \frac{w(\delta, \omega)}{(2\delta \log(1/\delta))^{1/2}} = 1.
\]
Therefore, no trajectory can be Hölder continuous with exponent 1/2 on the time interval $[0,T]$, for any $T$. The following also holds.

Corollary 3.14. There exists an event $N$ of zero probability such that each Brownian sample path outside this set is not Hölder continuous with exponent $\alpha \ge 1/2$ in any time interval $I \subset \mathbb{R}_+$ having non-empty interior.

Proof. Taking $J = [q,r]$, for $q < r \in \mathbb{Q}_+$,
\[
\lim_{\delta \to 0^+} \sup_{s < t \in J,\ |t-s| \le \delta} \frac{|B_t - B_s|}{(2\delta \log(1/\delta))^{1/2}} = 1 \quad \text{a.s.} \tag{3.10}
\]
Therefore, if $N_{q,r}$ is the negligible set where (3.10) fails, then $N = \bigcup_{q < r \in \mathbb{Q}_+} N_{q,r}$ is again an event of probability zero. Now the thesis follows: if $\omega$ outside $N$ were such that the Brownian path $t \mapsto B_t(\omega)$ is Hölder continuous with exponent $\alpha \ge 1/2$ in an interval $I$ with non-empty interior, there would exist an interval $[q,r]$ contained in $I$ such that the path is Hölder continuous with exponent $\alpha \ge 1/2$ in $[q,r]$, and we would have a contradiction. □
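Lévy's theorem can be illustrated on a simulated path: the ratio $w(\delta)/\sqrt{2\delta \log(1/\delta)}$, computed on a discrete grid, should be of order 1 for small $\delta$. In the sketch below the grid size and window widths are arbitrary illustration choices, and the assertions only check the order of magnitude, since the convergence in (3.9) is slow.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2**13
dt = 1.0 / n
# Discretised Brownian path on [0, 1]: B[i] ~ B(i/n).
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))])

def modulus(B, k):
    """Discrete modulus w(k*dt): largest |B_t - B_s| over grid pairs with |t-s| <= k*dt."""
    return max(np.abs(B[j:] - B[:-j]).max() for j in range(1, k + 1))

for k in [32, 128]:
    delta = k * dt
    ratio = modulus(B, k) / np.sqrt(2.0 * delta * np.log(1.0 / delta))
    # Levy's theorem predicts ratio -> 1 as delta -> 0; on a finite grid we
    # only check that the normalisation gives the right order of magnitude.
    assert 0.5 < ratio < 2.0
```

In particular, since $\sqrt{2\delta \log(1/\delta)}/\sqrt{\delta} \to \infty$, the same computation shows numerically why exponent $1/2$ fails in Corollary 3.14.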
3.3 The canonical space

Let us consider a real, standard Brownian motion $\{B_t, t \in [0,1]\}$ defined on the space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, \mathbb{P})$. Having chosen a finite time interval simplifies some technical details; this restriction may be removed if necessary. Without loss of generality, we can assume that $\{B_t\}$ has continuous paths (by taking a modification of the process, if necessary). We consider the mapping
\[
\xi : \omega \mapsto \{ t \mapsto B_t(\omega) \},
\]
which associates to each $\omega$ its trajectory, as an element of the space $C = C_0([0,1])$ of real valued continuous functions vanishing at 0. $C$, endowed with the distance
\[
d(\omega, \eta) = \sup_{t \in [0,1]} |\omega(t) - \eta(t)|,
\]
is a complete, separable metric space. Let $\mathcal{G}'$ be the Borel $\sigma$-field generated by the open sets of $C$. Define on the space $C$ the coordinate functions (or projections) $X(t)$, $X(t)(\omega) = \omega(t)$, for every $t \in [0,1]$ and $\omega \in C$. We also consider on $C$ the $\sigma$-algebra $\mathcal{G}$ generated by the finite-dimensional sets
\[
\{ \omega \in C : \omega(t_1) \in A_1, \dots, \omega(t_n) \in A_n \},
\]
with $n \in \mathbb{N}$, $t_1, \dots, t_n \in [0,1]$, $A_i \in \mathcal{B}(\mathbb{R})$, $i = 1, \dots, n$. It has the property that all the coordinate functions $X(t)$ are $\mathcal{G}$-measurable, and $\mathcal{G}$ is the smallest $\sigma$-field with this property.

Lemma 3.15. $\mathcal{G} = \mathcal{G}'$.

Proof. The inclusion $\mathcal{G} \subset \mathcal{G}'$ is obvious; let us check the other one. The space $C$ is separable, hence
\[
d(\omega, \eta) = \sup_{t \in \mathbb{Q} \cap [0,1]} |\omega(t) - \eta(t)|;
\]
this implies that the balls $U = \{ \omega \in C : |\omega(t) - \gamma(t)| \le \varepsilon \ \forall\, t \in [0,1] \}$ are in $\mathcal{G}$. But every open set $A \in \mathcal{G}'$ can be written as a countable union of balls, hence it is in $\mathcal{G}$. □

As a corollary we have the following result, whose proof we leave to the reader.

Problem 3.4. The mapping $\xi : (\Omega, \mathcal{F}) \to (C, \mathcal{G})$ is measurable, i.e., $\xi$ is a random variable taking values in $C$.

Thanks to the above result, on the space $(C, \mathcal{G})$ we can define the image measure $\tilde{\mathbb{P}} = \mathbb{P} \circ B^{-1}$ of $\mathbb{P}$ under $\{B_t, t \ge 0\}$, i.e.,
\[
\tilde{\mathbb{P}}(G) = \mathbb{P}(\omega \in \Omega : t \mapsto B_t(\omega) \in G), \qquad G \in \mathcal{G}.
\]
$\tilde{\mathbb{P}}$ is called the Wiener measure.
Define, on the probability space $(C, \mathcal{G}, \tilde{\mathbb{P}})$, the coordinate process $X(t)(\omega) = \omega(t)$ for every $t \in [0,1]$ and $\omega \in C$. $X$ is equivalent to $B$ and is often called the canonical Brownian motion.

Addendum. A second construction of Brownian motion

Here we propose an explicit way to construct a real Brownian motion. This construction goes back to Lévy and Ciesielski (see, for instance, Karatzas and Shreve [KS88] or Zabczyk [Za04]). It does not require an appeal to the Kolmogorov existence theorem. We proceed in several steps and leave the details as an exercise.

Step I. We first define the Haar functions $h_k : [0,1] \to \mathbb{R}$, $k \in \mathbb{N}$:
\[
h_0(t) = 1, \quad t \in [0,1], \qquad h_1(t) = \begin{cases} 1 & \text{for } t \in [0, 1/2], \\ -1 & \text{for } t \in (1/2, 1], \end{cases}
\]
and, if $n \in \mathbb{N}$ and $2^n \le k < 2^{n+1}$,
\[
h_k(t) = \begin{cases} 2^{n/2} & \text{for } t \in \big[ \frac{k-2^n}{2^n}, \frac{k-2^n+1/2}{2^n} \big], \\[2pt] -2^{n/2} & \text{for } t \in \big( \frac{k-2^n+1/2}{2^n}, \frac{k-2^n+1}{2^n} \big], \\[2pt] 0 & \text{otherwise.} \end{cases}
\]
The sequence $\{h_k\}$ forms a complete orthonormal system in $L^2(0,1)$.

Step II. Define the Schauder functions $s_k : [0,1] \to \mathbb{R}$, $k \in \mathbb{N}$,
\[
s_k(t) = \int_0^t h_k(s)\, \mathrm{d}s, \qquad k \in \mathbb{N}.
\]
Note that all $s_k$ are non-negative; moreover we have
\[
\|s_k\|_\infty = \sup_{t \in [0,1]} |s_k(t)| = 2^{n/2}\, \frac{1}{2^{n+1}} = 2^{-n/2-1}, \qquad 2^n \le k < 2^{n+1}. \tag{3.11}
\]

Step III. Let us consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which a sequence $\{X_n, n \in \mathbb{N}\}$ of independent real Gaussian random variables is defined, the law of each $X_n$ being $N(0,1)$. For instance, we can take $\Omega = \prod_{i \in \mathbb{N}} R_i$, the product of infinitely many copies $R_i = \mathbb{R}$, $i \in \mathbb{N}$, of the real line, endowed with the product $\sigma$-algebra $\mathcal{F} = \mathcal{B}(\mathbb{R}) \otimes \mathcal{B}(\mathbb{R}) \otimes \dots$; as measure $\mathbb{P}$ we take the infinite product of Gaussian laws $N(0,1)$, and we define $X_n(\omega) = \omega_n$ for any $\omega \in \Omega$, $n \in \mathbb{N}$. It is clear that $(\Omega, \mathcal{F}, \mathbb{P}, \{X_n\})$ satisfies our requirements.

Step IV. Consider the previous sequence of independent Gaussian random variables $\{X_n\}$. One has that $|X_k(\omega)|$ is $O(\sqrt{\log k})$ as $k \to \infty$, a.s.
Indeed, we have $\mathbb{P}(|X_k| > 4\sqrt{\log k}) \le c\, e^{-4 \log k} = \frac{c}{k^4}$, $k \in \mathbb{N}$, and, applying the Borel–Cantelli lemma, for almost every $\omega$ there exists $k_0 = k_0(\omega)$ such that $|X_k(\omega)| \le 4\sqrt{\log k}$ for all $k \ge k_0$.

Step V. Define
\[
B_t(\omega) = \sum_{k \ge 0} X_k(\omega)\, s_k(t), \qquad \omega \in \Omega,\ t \in [0,1].
\]
One verifies that $\{B_t, t \in [0,1]\}$ is a Brownian motion (note that $B_0(\omega) = 0$ for any $\omega$).

Step VI. We extend the previous definition of $B_t$ to $t \in [0, \infty)$. Consider a sequence of infinitely many copies of the same probability space:
\[
(\Omega_k, \mathcal{F}_k, \mathbb{P}_k) = (\Omega, \mathcal{F}, \mathbb{P}), \qquad k \ge 0,
\]
where $(\Omega, \mathcal{F}, \mathbb{P})$ is the probability space on which $B_t$, $t \in [0,1]$, is defined. Let us introduce the product probability space
\[
\dot{\Omega} = \prod_{k \ge 0} \Omega_k, \qquad \dot{\mathcal{F}} = \bigotimes_{k \ge 0} \mathcal{F}_k, \qquad \dot{\mathbb{P}} = \bigotimes_{k \ge 0} \mathbb{P}_k.
\]
Let us denote by $B_t^k$ the Brownian motion on $(\Omega_k, \mathcal{F}_k, \mathbb{P}_k)$, $t \in [0,1]$. Define recursively
\[
\dot{B}_t = B_t^1, \quad t \in [0,1], \qquad \dot{B}_t = \dot{B}_n + B_{t-n}^{n+1}, \quad t \in [n, n+1],\ n \ge 1.
\]
Hence, for $\omega = (\omega_k) \in \dot{\Omega}$,
\[
\dot{B}_t(\omega) = B_t^1(\omega_1), \quad t \in [0,1]; \qquad \dot{B}_t(\omega) = B_1^1(\omega_1) + B_{t-1}^2(\omega_2), \quad t \in [1,2]; \qquad \dots \qquad \dot{B}_t(\omega) = \sum_{k=1}^{n} B_1^k(\omega_k) + B_{t-n}^{n+1}(\omega_{n+1}), \quad t \in [n, n+1].
\]
One checks that $\{\dot{B}_t, t \ge 0\}$ is a real Brownian motion on $(\dot{\Omega}, \dot{\mathcal{F}}, \dot{\mathbb{P}})$.

Remark 3.16. Let $\{B_t, t \ge 0\}$ be a real standard Brownian motion defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The above construction suggests how to obtain an $n$-dimensional Brownian motion. Consider a sequence of copies of the same probability space:
\[
(\Omega_k, \mathcal{F}_k, \mathbb{P}_k) = (\Omega, \mathcal{F}, \mathbb{P}), \qquad k = 1, \dots, n,
\]
where $(\Omega, \mathcal{F}, \mathbb{P})$ is the probability space on which $\{B_t, t \ge 0\}$ is defined. On the product probability space
\[
\dot{\Omega} = \prod_{k=1}^{n} \Omega_k, \qquad \dot{\mathcal{F}} = \bigotimes_{k=1}^{n} \mathcal{F}_k, \qquad \dot{\mathbb{P}} = \bigotimes_{k=1}^{n} \mathbb{P}_k,
\]
we define
\[
W_t = (B_t(\omega_1), \dots, B_t(\omega_n)), \qquad \omega = (\omega_1, \dots, \omega_n) \in \dot{\Omega},\ t \ge 0.
\]
It is then an easy computation to verify that $\{W_t, t \ge 0\}$ is an $n$-dimensional Brownian motion according to Definition 2.5.

References

[Ar74] L. Arnold, Stochastic differential equations: Theory and applications, (J. Wiley & Sons, 1974).
[Ba00] P. Baldi, Equazioni differenziali stocastiche e applicazioni, (Pitagora Editrice, 2000). Second edition.
[Bau96] H. Bauer, Probability Theory, (De Gruyter, 1996). De Gruyter Studies in Mathematics, 23.
[Bi95] P. Billingsley, Probability and measure, (Wiley, 1995). Third edition. Wiley Series in Probability and Mathematical Statistics.
[BH91] N. Bouleau and F. Hirsch, Dirichlet forms and analysis on Wiener space, (De Gruyter, 1991). De Gruyter Studies in Mathematics, 14.
[Ca96] P. Cattiaux, S. Roelly and H. Zessin, Une approche gibbsienne des diffusions browniennes infini-dimensionnelles, Probab. Theory Related Fields, 104(2):147–179, 1996.
[Ch82] K.-L. Chung, Lectures from Markov processes to Brownian motion, (Springer-Verlag, 1982). Grundlehren der Mathematischen Wissenschaften, 249.
[Ch02] K.-L. Chung, Green, Brown, and probability & Brownian motion on the line, (World Scientific Publishing Co., 2002).
[DD83] D. Dacunha-Castelle and M. Duflo, Probabilités et statistiques, Tome 1 & 2, (Masson, 1983).
[Da78] G. Da Prato, M. Iannelli and L. Tubaro, Dissipative functions and finite-dimensional stochastic differential equations, J. Math. Pures Appl., 57:173–180, 1978.
[Da98] G. Da Prato, Introduction to differential stochastic equations, (Scuola Normale Superiore, Pisa, 1998). Collana "Appunti Scuola Normale Superiore".
[Da01] G. Da Prato, An introduction to infinite dimensional analysis, (Scuola Normale Superiore, Pisa, 2001). Collana "Appunti Scuola Normale Superiore".
[Da03] G. Da Prato, Introduction to Brownian Motion and Malliavin Calculus, (Università di Trento, Dipartimento di Matematica, 2003). Lecture Notes Series 17.
[Da05] G. Da Prato, Introduction to Stochastic Analysis, (Scuola Normale Superiore, Pisa, 2005).
[DZ92] G. Da Prato and J. Zabczyk, Stochastic equations in infinite dimensions, (Cambridge University Press, Cambridge, 1992). Encyclopedia of Mathematics and its Applications, 44.
[Do53] J. L. Doob, Stochastic Processes, (Wiley, 1953).
[Du02] R. M. Dudley, Real analysis and probability, (Cambridge University Press, Cambridge, 2002). Cambridge Studies in Advanced Mathematics, 74. Revised reprint of the 1989 original.
[DY69] E. Dynkin and A. Yushkevich, Markov processes: Theorems and problems, (Plenum Press, New York, 1969).
[Fl93] F. Flandoli, Calcolo delle probabilità e applicazioni, lecture notes, 1993.
[Fr71] D. Freedman, Brownian Motion and Diffusion, (Springer, 1983).
[Fri75] A. Friedman, Stochastic differential equations and applications, (Academic Press, 1975).
[GT82] B. Gaveau and P. Trauber, L'intégrale stochastique comme opérateur de divergence dans l'espace fonctionnel, J. Funct. Anal., 46(2):230–238, 1982.
[GP92] A. Grorud and É. Pardoux, Intégrales hilbertiennes anticipantes par rapport à un processus de Wiener cylindrique et calcul stochastique associé, Appl. Math. Optim., 25(1):31–49, 1992.
[Hi80] T. Hida, Brownian motion, (Springer-Verlag, New York, 1980). Applications of Mathematics, 11.
[Hi93] T. Hida, H.-H. Kuo, J. Potthoff and L. Streit, White noise, (Kluwer Academic Publishers Group, Dordrecht, 1993). Mathematics and its Applications, 253.
[Ho96] H. Holden, B. Øksendal, J. Ubøe and T. Zhang, Stochastic partial differential equations, (Birkhäuser Boston Inc., Boston, MA, 1996).
[IW81] N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes, (North-Holland Publishing Co., Amsterdam-New York; Kodansha, Ltd., Tokyo, 1981). North-Holland Mathematical Library, 24.
[JP00] J. Jacod and P. Protter, Probability essentials, (Springer-Verlag, Berlin, 2000). Universitext.
[Ka02] O. Kallenberg, Foundations of Modern Probability, (Springer-Verlag, New York, 2002). Second edition.
[KS88] I. Karatzas and S. Shreve, Brownian motion and stochastic calculus, (Springer-Verlag, New York, 1988). Graduate Texts in Mathematics, 113.
[LL97] D. Lamberton and B. Lapeyre, Introduction au calcul stochastique appliqué à la finance, (Ellipses, Édition Marketing, Paris, 1997). Second edition.
[Let93] G. Letta, Probabilità elementare, (Zanichelli Editore, 1993).
[Lé54] P. Lévy, Le Mouvement Brownien, (Gauthier-Villars, 1954).
[Lé65] P. Lévy, Processus Stochastiques et Mouvement Brownien, (Gauthier-Villars, 1965).
[Ma97] P. Malliavin, Stochastic analysis, (Springer-Verlag, Berlin, 1997). Grundlehren der Mathematischen Wissenschaften, 313.
[Ne01] E. Nelson, Dynamical Theories of Brownian Motion, second edition, (Princeton University Press, 2001). Posted on the Web at http://www.math.princeton.edu/ nelson/books.html
[Nu95] D. Nualart, The Malliavin calculus and related topics, (Springer, 1995).
[NP88] D. Nualart and É. Pardoux, Stochastic calculus with anticipating integrands, Probab. Theory Rel. Fields, 78:535–581, 1988.
[NZ86] D. Nualart and M. Zakai, Generalized multiple stochastic integrals and the representation of Wiener functionals, Stochastics, 23:311–330.
[NZ89] D. Nualart and M. Zakai, The partial Malliavin calculus, in Séminaire de Probabilités XXIII, (Springer, 1989). Lecture Notes in Math., 1372.
[MR04] S. Maniglia and A. Rhandi, Gaussian measures on separable Hilbert spaces and applications, (Edizioni del Grifo, Quaderno 1/2004).
[Pr04] P. E. Protter, Stochastic integration and differential equations, (Springer-Verlag, Berlin, 2004). Second edition.
[Sk75] A. V. Skorohod, On a generalization of the stochastic integral, Teor. Verojatnost. i Primenen., 20(2):223–238, 1975.
[Ta88] K. Taira, Diffusion processes and partial differential equations, (Academic Press, 1988).
[Za04] J. Zabczyk, Topics in Stochastic Processes, (Scuola Normale Superiore, Pisa, 2004). Collana "Quaderni".