Chapter 3. Laws of large numbers

The laws of large numbers are a collection of theorems that establish the convergence, in some of the senses already studied, of sequences of the type $\{n^{-1}\sum_{i=1}^{n} X_i - a_n\}$, where $a_n$ is a constant generally given by $a_n = n^{-1}\sum_{i=1}^{n} E(X_i)$. These theorems are classified as weak or strong laws, depending on whether the convergence is in probability or almost sure.

3.1 Weak laws of large numbers

Theorem 3.1 (Tchebychev's Theorem) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of independent r.v.s (not necessarily identically distributed) such that $V(X_n) \le M < \infty$ for all $n \in \mathbb{N}$. Then,
$$\frac{1}{n}\sum_{i=1}^{n} X_i - \frac{1}{n}\sum_{i=1}^{n} E(X_i) \xrightarrow{P} 0.$$

Corollary 3.1 (Tchebychev's Theorem for r.v.s with equal mean) Under the conditions of Theorem 3.1, if $E(X_n) = \mu$ for all $n \in \mathbb{N}$, then
$$\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{P} \mu.$$

Thus, the sample mean converges weakly to the population mean. Historically, the next corollary was the first law of large numbers.

Corollary 3.2 (Bernoulli's Theorem) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. r.v.s distributed as $\mathrm{Bern}(p)$. Then,
$$\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{P} p.$$

The next theorem does not require the existence of the variances, but in turn requires the r.v.s to be identically distributed.

Theorem 3.2 (Khintchine's weak law of large numbers) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. r.v.s with mean $E(X_n) = \mu \in (-\infty, \infty)$. Then,
$$\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{P} \mu.$$

3.2 Strong law of large numbers

Lemma 3.1 (Kolmogorov's bound) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of independent r.v.s with mean $E(X_n) = \mu_n$ and variance $V(X_n) = \sigma_n^2$, both finite. Let $S_n = \sum_{i=1}^{n} X_i$ and $c_n^2 = \sum_{i=1}^{n} \sigma_i^2$. Then, for all $H > 0$,
$$P\left(\bigcup_{k=1}^{n} \{\omega \in \Omega : |S_k(\omega) - E(S_k)| \ge H c_n\}\right) \le \frac{1}{H^2}.$$

Theorem 3.3 (Kolmogorov's strong law of large numbers) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of independent r.v.s with mean $E(X_n) = \mu_n$ and variance $V(X_n) = \sigma_n^2$, both finite. If
$$\sum_{n=1}^{\infty} \frac{\sigma_n^2}{n^2} < \infty, \quad\text{then}\quad \frac{1}{n}\sum_{i=1}^{n} X_i - \frac{1}{n}\sum_{i=1}^{n} \mu_i \xrightarrow{a.s.} 0.$$

Corollary 3.3 (Borel-Cantelli's Theorem) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. r.v.s distributed as $\mathrm{Bern}(p)$. Then,
$$\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{a.s.} p.$$

This theorem says that the relative frequency of a dichotomous event converges almost surely to the probability of the event.

Finally, the next strong law requires nothing of the variances, but assumes that the r.v.s are i.i.d.

Theorem 3.4 (Khintchine's strong law of large numbers) Let $\{X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. r.v.s with finite mean $E(X_n) = \mu$. Then,
$$\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{a.s.} \mu.$$

ISABEL MOLINA
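The convergence asserted by Bernoulli's theorem can be observed empirically: simulate i.i.d. Bern(p) trials and watch the running sample mean settle near p. Below is a minimal simulation sketch, not part of the original notes; it assumes Python with NumPy, and the seed, p, and n are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the run is reproducible
p = 0.3                          # Bernoulli success probability (illustrative choice)
n = 100_000                      # number of trials

# Simulate X_1, ..., X_n ~ Bern(p) and compute the running sample means
# (1/k) * sum_{i=1}^{k} X_i for k = 1, ..., n.
x = rng.binomial(1, p, size=n)
running_means = np.cumsum(x) / np.arange(1, n + 1)

# By Bernoulli's theorem (Corollary 3.2) the sample mean converges to p,
# so the deviation |mean - p| should shrink as the number of trials grows.
for k in (100, 10_000, 100_000):
    print(f"n = {k:>7}: |mean - p| = {abs(running_means[k - 1] - p):.5f}")
```

Plotting `running_means` against the trial index gives the familiar picture of the sample mean fluctuating early on and then hugging the line at p, which is the almost-sure convergence of Corollary 3.3 seen along one sample path.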