Nonlinear Analysis Forum 5 (2000), pp. 125–135
AN OSTROWSKI’S TYPE INEQUALITY FOR A
RANDOM VARIABLE WHOSE PROBABILITY
DENSITY FUNCTION BELONGS TO L∞ [a, b]
N. S. Barnett and S. S. Dragomir
School of Communications and Informatics
Victoria University of Technology
P.O. Box 14428, MCMC Melbourne
Victoria, 8001, Australia
E-mail : {neil,sever}@matilda.vu.edu.au
Abstract. An inequality of Ostrowski's type is given for a random variable whose probability density function belongs to $L_\infty[a, b]$, in terms of the cumulative distribution function and the expectation. An application to a Beta random variable is also given.
1. Introduction
The following theorem contains the integral inequality which is known
in the literature as Ostrowski’s inequality [2, p. 469].
Theorem 1. Let $f : I \subseteq \mathbb{R} \to \mathbb{R}$ be a differentiable mapping on $I^\circ$ ($I^\circ$ is the interior of $I$), and let $a, b \in I^\circ$ with $a < b$. If $f' : (a, b) \to \mathbb{R}$ is bounded on $(a, b)$, i.e., $\|f'\|_\infty := \sup_{t \in (a, b)} |f'(t)| < \infty$, then we have the following inequality
$$
\left| f(x) - \frac{1}{b - a}\int_a^b f(t)\,dt \right| \le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f'\|_\infty
\tag{1.1}
$$
for all $x \in [a, b]$. The constant $\frac{1}{4}$ is sharp in the sense that it cannot be replaced by a smaller one.

Received June 15, 1999; accepted April 10, 2000.
1991 Mathematics Subject Classification: Primary 26D15; Secondary 65 99Axx.
Key words and phrases: Ostrowski's type inequalities, probability density functions, cumulative distribution functions, expectations, Beta random variable.
In [1], Dragomir and Wang applied Ostrowski's inequality in numerical analysis, obtaining an estimate of the error bound for quadrature rules of Riemann type in terms of the infinity norm $\|\cdot\|_\infty$. Applications to special means (the logarithmic mean, the identric mean, the $p$-logarithmic mean and so on) were also given.

The main aim of this paper is to give an Ostrowski's type inequality for random variables whose probability density functions are in $L_\infty[a, b]$. An application to a Beta random variable is also given.
2. The results
Let $X$ be a random variable with probability density function $f : [a, b] \subset \mathbb{R} \to \mathbb{R}_+$ and cumulative distribution function $F(x) = \Pr(X \le x)$.

The following theorem holds.

Theorem 2. Let $f \in L_\infty[a, b]$ and put $\|f\|_\infty := \sup_{t \in [a, b]} f(t) < \infty$. Then we have the following inequality
$$
\left| \Pr(X \le x) - \frac{b - E(X)}{b - a} \right| \le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f\|_\infty
\tag{2.1}
$$
or, equivalently,
$$
\left| \Pr(X \ge x) - \frac{E(X) - a}{b - a} \right| \le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f\|_\infty
\tag{2.2}
$$
for all $x \in [a, b]$. The constant $\frac{1}{4}$ in (2.1) and (2.2) is sharp.

Proof. Let $x, y \in [a, b]$. Then
$$
|F(x) - F(y)| = \left| \int_y^x f(t)\,dt \right| \le |x - y|\,\|f\|_\infty,
$$
which shows that $F$ is $\|f\|_\infty$-Lipschitzian on $[a, b]$.
Consider the kernel $p : [a, b]^2 \to \mathbb{R}$ given by
$$
p(x, t) :=
\begin{cases}
t - a & \text{if } t \in [a, x], \\
t - b & \text{if } t \in (x, b].
\end{cases}
$$
Then the Riemann-Stieltjes integral $\int_a^b p(x, t)\,dF(t)$ exists for any $x \in [a, b]$, and the formula of integration by parts for the Riemann-Stieltjes integral gives
$$
\begin{aligned}
\int_a^b p(x, t)\,dF(t)
&= \int_a^x (t - a)\,dF(t) + \int_x^b (t - b)\,dF(t) \\
&= (t - a)F(t)\big|_a^x - \int_a^x F(t)\,dt + (t - b)F(t)\big|_x^b - \int_x^b F(t)\,dt \\
&= (b - a)F(x) - \int_a^b F(t)\,dt.
\end{aligned}
\tag{2.3}
$$
The integration by parts formula for the Riemann-Stieltjes integral also gives
$$
E(X) = \int_a^b t\,dF(t) = tF(t)\big|_a^b - \int_a^b F(t)\,dt = bF(b) - aF(a) - \int_a^b F(t)\,dt = b - \int_a^b F(t)\,dt.
\tag{2.4}
$$
Now, using (2.3) and (2.4), we get the following equality:
$$
(b - a)F(x) + E(X) - b = \int_a^b p(x, t)\,dF(t)
\tag{2.5}
$$
for all $x \in [a, b]$.
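(As a quick consistency check, not part of the original argument: for the uniform density on $[a, b]$ one has $F(x) = \frac{x - a}{b - a}$ and $E(X) = \frac{a + b}{2}$, so the left-hand side of (2.5) equals $x - \frac{a + b}{2}$, while the right-hand side equals $\frac{1}{b - a}\left[ \int_a^x (t - a)\,dt + \int_x^b (t - b)\,dt \right] = \frac{(x - a)^2 - (b - x)^2}{2(b - a)} = x - \frac{a + b}{2}$ as well.)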
Now, assume that
$$
\Delta_n : a = x_0^{(n)} < x_1^{(n)} < \cdots < x_{n-1}^{(n)} < x_n^{(n)} = b
$$
is a sequence of divisions with $\nu(\Delta_n) \to 0$ as $n \to \infty$, where
$$
\nu(\Delta_n) := \max\left\{ x_{i+1}^{(n)} - x_i^{(n)} : i = 0, \dots, n - 1 \right\}.
$$
If $p : [a, b] \to \mathbb{R}$ is Riemann integrable on $[a, b]$ and $\nu : [a, b] \to \mathbb{R}$ is $L$-Lipschitzian (Lipschitzian with the constant $L$), then, with intermediate points $\xi_i^{(n)} \in \bigl[x_i^{(n)}, x_{i+1}^{(n)}\bigr]$, we have
$$
\begin{aligned}
\left| \int_a^b p(x)\,d\nu(x) \right|
&= \lim_{\nu(\Delta_n) \to 0} \left| \sum_{i=0}^{n-1} p\bigl(\xi_i^{(n)}\bigr)\bigl[ \nu\bigl(x_{i+1}^{(n)}\bigr) - \nu\bigl(x_i^{(n)}\bigr) \bigr] \right| \\
&\le \lim_{\nu(\Delta_n) \to 0} \sum_{i=0}^{n-1} \bigl| p\bigl(\xi_i^{(n)}\bigr) \bigr|\,
\frac{\bigl| \nu\bigl(x_{i+1}^{(n)}\bigr) - \nu\bigl(x_i^{(n)}\bigr) \bigr|}{x_{i+1}^{(n)} - x_i^{(n)}}\,
\bigl( x_{i+1}^{(n)} - x_i^{(n)} \bigr) \\
&\le L \lim_{\nu(\Delta_n) \to 0} \sum_{i=0}^{n-1} \bigl| p\bigl(\xi_i^{(n)}\bigr) \bigr| \bigl( x_{i+1}^{(n)} - x_i^{(n)} \bigr)
= L \int_a^b |p(x)|\,dx.
\end{aligned}
\tag{2.6}
$$
Applying the inequality (2.6) to the mappings $p(x, \cdot)$ and $F(\cdot)$, we get
$$
\begin{aligned}
\left| \int_a^b p(x, t)\,dF(t) \right|
&\le \|f\|_\infty \int_a^b |p(x, t)|\,dt
= \|f\|_\infty \left[ \int_a^x (t - a)\,dt + \int_x^b (b - t)\,dt \right] \\
&= \|f\|_\infty \,\frac{(x - a)^2 + (b - x)^2}{2}
= \left[ \frac{1}{4}(b - a)^2 + \left( x - \frac{a + b}{2} \right)^2 \right] \|f\|_\infty
\end{aligned}
$$
for all $x \in [a, b]$.
Finally, by the identity (2.5) we deduce that, for all $x \in [a, b]$,
$$
\left| F(x) - \frac{b - E(X)}{b - a} \right| \le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f\|_\infty,
$$
which proves (2.1).
Now, taking into account the fact that $\Pr(X \ge x) = 1 - \Pr(X \le x)$, the inequality (2.2) is also obtained.
To prove the sharpness of the constant $\frac{1}{4}$, assume that the inequality (2.1) holds with a constant $c > 0$, that is,
$$
\left| \Pr(X \le x) - \frac{b - E(X)}{b - a} \right| \le \left[ c + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f\|_\infty
\tag{2.7}
$$
for all $x \in [a, b]$.

Assume that $X_0$ is a random variable having the probability density function $f_0 : [0, 1] \to \mathbb{R}$ given by $f_0(t) = 1$. Then we find that
$$
\Pr(X_0 \le x) = x \quad (x \in [0, 1]), \qquad E(X_0) = \frac{1}{2}, \qquad \|f_0\|_\infty = 1.
$$
Consequently, (2.7) becomes
$$
\left| x - \frac{1}{2} \right| \le c + \left( x - \frac{1}{2} \right)^2 \quad \text{for all } x \in [0, 1].
$$
Choosing $x = 0$, we get $c \ge \frac{1}{4}$, and the sharpness of the constant is thus proved.
Corollary 1. Under the above assumptions, we have the following double inequality:
$$
b - \frac{1}{2}(b - a)^2 \|f\|_\infty \le E(X) \le a + \frac{1}{2}(b - a)^2 \|f\|_\infty.
\tag{2.8}
$$
Proof. We know that $a \le E(X) \le b$. Now, choose $x = a$ in (2.1) (note that $\Pr(X \le a) = 0$) to obtain
$$
\frac{b - E(X)}{b - a} \le \frac{1}{2}(b - a)\|f\|_\infty,
$$
that is,
$$
b - E(X) \le \frac{1}{2}(b - a)^2 \|f\|_\infty,
$$
which is equivalent to the first inequality in (2.8). Also, choose $x = b$ in (2.1) (now $\Pr(X \le b) = 1$) to get
$$
\left| 1 - \frac{b - E(X)}{b - a} \right| \le \frac{1}{2}(b - a)\|f\|_\infty,
$$
that is,
$$
E(X) - a \le \frac{1}{2}(b - a)^2 \|f\|_\infty,
$$
which proves the second inequality in (2.8).
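(A worked instance, not in the original: for the illustrative density $f(t) = 2t$ on $[0, 1]$ used in the sketch above, for which $E(X) = \frac{2}{3}$ and $\|f\|_\infty = 2$, inequality (2.8) reads $1 - \frac{1}{2}\cdot 1 \cdot 2 = 0 \le \frac{2}{3} \le 0 + \frac{1}{2}\cdot 1 \cdot 2 = 1$.)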
Remark 1. We know that
$$
1 = \int_a^b f(x)\,dx \le (b - a)\|f\|_\infty,
$$
which gives us
$$
\|f\|_\infty \ge \frac{1}{b - a}.
$$
Now, if we assume that $\|f\|_\infty$ is not too large, say,
$$
\|f\|_\infty \le \frac{2}{b - a},
\tag{2.9}
$$
then we see that
$$
a + \frac{1}{2}(b - a)^2 \|f\|_\infty \le b
\qquad \text{and} \qquad
b - \frac{1}{2}(b - a)^2 \|f\|_\infty \ge a,
$$
which shows that the inequality (2.8) is tighter than $a \le E(X) \le b$ when (2.9) holds.

Another inequality equivalent to (2.8), which can be more useful in practice, is the following.
Corollary 2. With the above assumptions, we have the following inequality:
$$
\left| E(X) - \frac{a + b}{2} \right| \le \frac{(b - a)^2}{2}\left[ \|f\|_\infty - \frac{1}{b - a} \right].
\tag{2.10}
$$

Proof. From the inequality (2.8) we have
$$
b - \frac{a + b}{2} - \frac{1}{2}(b - a)^2\|f\|_\infty \le E(X) - \frac{a + b}{2} \le a - \frac{a + b}{2} + \frac{1}{2}(b - a)^2\|f\|_\infty,
$$
that is,
$$
-\frac{(b - a)^2}{2}\left[ \|f\|_\infty - \frac{1}{b - a} \right] \le E(X) - \frac{a + b}{2} \le \frac{(b - a)^2}{2}\left[ \|f\|_\infty - \frac{1}{b - a} \right],
$$
which is exactly (2.10).
This corollary makes it possible to give a sufficient condition, in terms of $\|f\|_\infty$, for the expectation $E(X)$ to be close to the midpoint $\frac{a + b}{2}$ of the interval.
Corollary 3. Let $X$ and $f$ be as above and let $\varepsilon > 0$. If
$$
\|f\|_\infty \le \frac{1}{b - a} + \frac{2\varepsilon}{(b - a)^2},
\tag{2.11}
$$
then we obtain
$$
\left| E(X) - \frac{a + b}{2} \right| \le \varepsilon.
$$

Proof. The proof is obvious and we shall omit the details.
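(Another worked instance, not in the original: for the illustrative density $f(t) = 2t$ on $[0, 1]$ and $\varepsilon = \frac{1}{2}$, condition (2.11) reads $\|f\|_\infty = 2 \le 1 + 2\cdot\frac{1}{2} = 2$, and indeed $\left| E(X) - \frac{1}{2} \right| = \frac{1}{6} \le \frac{1}{2}$.)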
The following corollary of Theorem 2 also holds.
Corollary 4. Let $X$ and $f$ be as above. Then we have the following inequality:
$$
\left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} \right|
\le \frac{1}{4}(b - a)\|f\|_\infty + \frac{1}{b - a}\left| E(X) - \frac{a + b}{2} \right|
\le \frac{3}{4}(b - a)\|f\|_\infty - \frac{1}{2}.
\tag{2.12}
$$

Proof. If we choose $x = \frac{a + b}{2}$ in (2.1), we get
$$
\left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{b - E(X)}{b - a} \right| \le \frac{1}{4}(b - a)\|f\|_\infty,
$$
which is clearly equivalent to
$$
\left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} + \frac{1}{b - a}\left( E(X) - \frac{a + b}{2} \right) \right| \le \frac{1}{4}(b - a)\|f\|_\infty.
$$
Now, using the triangle inequality, we get
$$
\begin{aligned}
\left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} \right|
&= \left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} + \frac{1}{b - a}\left( E(X) - \frac{a + b}{2} \right) - \frac{1}{b - a}\left( E(X) - \frac{a + b}{2} \right) \right| \\
&\le \left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} + \frac{1}{b - a}\left( E(X) - \frac{a + b}{2} \right) \right| + \frac{1}{b - a}\left| E(X) - \frac{a + b}{2} \right| \\
&\le \frac{1}{4}(b - a)\|f\|_\infty + \frac{1}{b - a}\left| E(X) - \frac{a + b}{2} \right|
\le \frac{3}{4}(b - a)\|f\|_\infty - \frac{1}{2},
\end{aligned}
$$
where the last inequality follows from (2.10), and the desired inequality is proved.
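(A worked instance, not in the original: for the illustrative density $f(t) = 2t$ on $[0, 1]$ one has $\Pr\!\left(X \le \frac{1}{2}\right) = \frac{1}{4}$, and the chain (2.12) reads $\frac{1}{4} \le \frac{1}{4}\cdot 2 + \left|\frac{2}{3} - \frac{1}{2}\right| = \frac{2}{3} \le \frac{3}{4}\cdot 2 - \frac{1}{2} = 1$.)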
Remark 2. A similar result applies for $\Pr\!\left( X \ge \frac{a + b}{2} \right)$. We shall omit the details.
Finally, the following result holds.

Corollary 5. Let $X$ and $f$ be as above. Then we have the following inequality:
$$
\left| E(X) - \frac{a + b}{2} \right| \le \frac{1}{4}(b - a)^2\|f\|_\infty + (b - a)\left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} \right|.
\tag{2.13}
$$

Proof. As in Corollary 4 above, we have
$$
\begin{aligned}
\frac{1}{b - a}\left| E(X) - \frac{a + b}{2} \right|
&\le \left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} + \frac{1}{b - a}\left( E(X) - \frac{a + b}{2} \right) \right|
+ \left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} \right| \\
&\le \frac{1}{4}(b - a)\|f\|_\infty + \left| \Pr\!\left( X \le \frac{a + b}{2} \right) - \frac{1}{2} \right|,
\end{aligned}
$$
from which we get (2.13).
Remark 3. If we assume that $f \in C[a, b]$, then $F$ is differentiable on $(a, b)$ and, in view of Ostrowski's inequality (1.1), we get
$$
\left| F(x) - \frac{1}{b - a}\int_a^b F(t)\,dt \right| \le \left[ \frac{1}{4} + \frac{\bigl(x - \frac{a+b}{2}\bigr)^2}{(b - a)^2} \right] (b - a)\,\|f\|_\infty
$$
for all $x \in [a, b]$.
Now, using the identity (2.4) we recapture the inequalities (2.1)
and (2.2) for random variables whose probability density functions are
continuous on [a, b].
3. Application for a Beta random variable
A Beta random variable $X$ with parameters $(p, q)$ has the probability density function
$$
f(x; p, q) := \frac{x^{p-1}(1 - x)^{q-1}}{B(p, q)}, \qquad 0 < x < 1,
$$
where $(p, q) \in \Omega := \{(p, q) : p, q > 0\}$ and $B(p, q)$ is the Beta function defined by
$$
B(p, q) := \int_0^1 t^{p-1}(1 - t)^{q-1}\,dt.
$$
We observe that, for $0 < p < 1$,
$$
\|f(\cdot\,; p, q)\|_\infty = \sup_{x \in (0, 1)} \frac{x^{p-1}(1 - x)^{q-1}}{B(p, q)} = \infty.
$$
Assume that $p, q \ge 1$. Then we find that
$$
\begin{aligned}
\frac{df(x; p, q)}{dx}
&= \frac{1}{B(p, q)}\left[ (p - 1)x^{p-2}(1 - x)^{q-1} - (q - 1)x^{p-1}(1 - x)^{q-2} \right] \\
&= \frac{x^{p-2}(1 - x)^{q-2}}{B(p, q)}\left[ (p - 1)(1 - x) - (q - 1)x \right]
= \frac{x^{p-2}(1 - x)^{q-2}}{B(p, q)}\left[ -(p + q - 2)x + (p - 1) \right].
\end{aligned}
$$
We observe that, for $p, q > 1$, $\frac{df(x; p, q)}{dx} = 0$ if and only if $x_0 = \frac{p - 1}{p + q - 2}$, and that $\frac{df(x; p, q)}{dx} > 0$ on $(0, x_0)$ and $\frac{df(x; p, q)}{dx} < 0$ on $(x_0, 1)$. Consequently, we see that
$$
\|f(\cdot\,; p, q)\|_\infty = f(x_0; p, q) = \frac{(p - 1)^{p-1}(q - 1)^{q-1}}{B(p, q)(p + q - 2)^{p+q-2}}.
$$
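(A quick check, not in the original: for $p = q = 2$ one has $x_0 = \frac{1}{2}$ and $B(2, 2) = \frac{1}{6}$, so the formula gives $\|f(\cdot\,; 2, 2)\|_\infty = \frac{1 \cdot 1}{\frac{1}{6}\cdot 2^2} = \frac{3}{2}$, which agrees with the direct evaluation $f\!\left(\frac{1}{2}; 2, 2\right) = \frac{\frac{1}{2}\cdot\frac{1}{2}}{1/6} = \frac{3}{2}$.)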
On the other hand, we have
$$
E(X) = \frac{1}{B(p, q)}\int_0^1 x \cdot x^{p-1}(1 - x)^{q-1}\,dx = \frac{B(p + 1, q)}{B(p, q)}.
$$
Upon employing the familiar relationships $B(p, q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p + q)}$ and $\Gamma(z + 1) = z\Gamma(z)$ ($z \in \mathbb{C}\setminus\{0, -1, -2, \dots\}$), where $\Gamma$ denotes the well-known Gamma function, it is easy to see that
$$
E(X) = \frac{p}{p + q}.
$$
Finally, using Theorem 2, we can state the following proposition.

Proposition 1. Let $X$ be a Beta random variable with the parameters $(p, q) \in [1, \infty) \times [1, \infty)$. Then we have the following inequalities:
$$
\left| \Pr(X \le x) - \frac{q}{p + q} \right| \le \left[ \frac{1}{4} + \left( x - \frac{1}{2} \right)^2 \right] \frac{(p - 1)^{p-1}(q - 1)^{q-1}}{B(p, q)(p + q - 2)^{p+q-2}}
$$
and
$$
\left| \Pr(X \ge x) - \frac{p}{p + q} \right| \le \left[ \frac{1}{4} + \left( x - \frac{1}{2} \right)^2 \right] \frac{(p - 1)^{p-1}(q - 1)^{q-1}}{B(p, q)(p + q - 2)^{p+q-2}},
$$
where $x \in [0, 1]$. In particular, we have
$$
\left| \Pr\!\left( X \le \frac{1}{2} \right) - \frac{q}{p + q} \right| \le \frac{1}{4} \cdot \frac{(p - 1)^{p-1}(q - 1)^{q-1}}{B(p, q)(p + q - 2)^{p+q-2}}
$$
and
$$
\left| \Pr\!\left( X \ge \frac{1}{2} \right) - \frac{p}{p + q} \right| \le \frac{1}{4} \cdot \frac{(p - 1)^{p-1}(q - 1)^{q-1}}{B(p, q)(p + q - 2)^{p+q-2}}.
$$
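As a final illustration, not taken from the paper, the sketch below evaluates the first inequality of Proposition 1 for the arbitrary choice $(p, q) = (2, 3)$; it computes $B(p, q)$ from the Gamma function and approximates $\Pr(X \le x)$ by a midpoint-rule integration of the density (the helper names beta_function and beta_cdf are ad hoc).

```python
import math

def beta_function(p, q):
    # B(p, q) = Gamma(p) * Gamma(q) / Gamma(p + q)
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def beta_cdf(x, p, q, n=20_000):
    # Pr(X <= x) for a Beta(p, q) variable, by midpoint-rule integration of the density.
    h = x / n
    total = sum(((i + 0.5) * h) ** (p - 1) * (1 - (i + 0.5) * h) ** (q - 1)
                for i in range(n))
    return h * total / beta_function(p, q)

p, q = 2.0, 3.0                                  # arbitrary parameters for illustration
sup_f = ((p - 1) ** (p - 1) * (q - 1) ** (q - 1)
         / (beta_function(p, q) * (p + q - 2) ** (p + q - 2)))

for k in range(11):
    x = k / 10
    lhs = abs(beta_cdf(x, p, q) - q / (p + q))
    rhs = (0.25 + (x - 0.5) ** 2) * sup_f
    assert lhs <= rhs + 1e-6, (x, lhs, rhs)

print("Proposition 1 (first inequality) holds at all sampled points.")
```

The second inequality can be checked in the same way, using $\Pr(X \ge x) = 1 - \Pr(X \le x)$.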
References
[1] S.S. Dragomir and S. Wang, Applications of Ostrowski’s inequality to the estimation of error bounds for some special means and some numerical quadrature
rules, Appl. Math. Lett. 11 (1998), 105–109.
[2] D.S. Mitrinović, J.E. Pečarić and A.M. Fink, Inequalities for Functions and Their Integrals and Derivatives, Kluwer Academic Publishers, Dordrecht, 1994.