Advances in Dynamical Systems and Applications
ISSN 0973-5321, Volume 8, Number 2, pp. 413–425 (2013)
http://campus.mst.edu/adsa
Solving Random Differential Equations
by Means of Differential Transform Methods
L. Villafuerte
CEFyMAP, Universidad Autónoma de Chiapas
Calle 4a. Ote. Nte. No. 1428, Tuxtla Gutiérrez
Chiapas, C.P. 29000, México
[email protected]
and
J.–C. Cortés
Instituto Universitario de Matemática Multidisciplinar
Universitat Politècnica de València
Edificio 8G, 46022, Valencia, Spain
[email protected]
Abstract

In this paper the random Differential Transform Method (DTM) is used to solve a time-dependent random linear differential equation. The mean square convergence of the random DTM is proven. Approximations to the mean and standard deviation functions of the solution stochastic process are shown in several illustrative examples.

AMS Subject Classifications: 65L20, 60G15, 39A10.
Keywords: p-stochastic calculus, random linear differential-difference equations, random differential transform method.
1 Introduction
Received October 29, 2012; Accepted January 16, 2013
Communicated by Eugenia N. Petropoulou

The differential transform method (DTM) is a semi-analytical method for obtaining the solution of differential equations, mainly in the form of a Taylor series. Although it is similar to the Taylor expansion, which requires the computation of derivatives of the data functions, DTM constructs an analytical solution in the form of polynomials (a truncated infinite series) with less computational effort. In the DTM, certain transformation rules are applied so that the governing differential equation and the initial/boundary conditions are transformed into a set of algebraic equations in terms of the differential transforms of the original functions. The solution of these algebraic equations gives the desired solution of the problem. The basic idea of DTM was introduced by Zhou [9]. The DTM has been successfully applied to a wide class of deterministic differential equations arising in many areas of science and engineering. Since many physical and engineering problems are more faithfully modeled by random differential equations, a random DTM based on mean fourth stochastic calculus has been developed in [8].
In this paper, we consider the random time-dependent linear differential equation:

$$
\begin{cases}
\dot X(t) = P_n(t)\, X(t) + B(t), \qquad P_n(t) := \displaystyle\sum_{i=0}^{n} A_i\, t^i, \quad 0 \le t \le T, \; n \ge 0,\\[4pt]
X(0) = X_0,
\end{cases}
\tag{1.1}
$$

where n is a nonnegative integer, B(t) is a stochastic process and A_i, 0 ≤ i ≤ n, and X_0 are random variables satisfying certain conditions to be specified later.
The aim of the paper is to give sufficient conditions on the data in such a way that a mean square solution of (1.1) can be expanded as

$$X(t) = \sum_{k=0}^{\infty} \hat X(k)\, t^k, \qquad 0 \le t \le T, \tag{1.2}$$
where the X̂(k) are random variables. In order to achieve our goal, a random DTM is applied to (1.1), taking advantage of results presented in [8]. Similar equations have been studied by the authors in [4, 5], but using different approaches. As we will see later, a major advantage of having a solution of the form (1.2) is that it allows one to obtain, in a practical way, the main statistical functions of the solution stochastic process, namely the expectation and the variance (or, equivalently, the standard deviation).

The paper is organized as follows. Section 2 collects definitions, notation and results that clarify the presentation of the paper. Section 3 establishes sufficient conditions on the data of the initial value problem (1.1) under which a mean square solution of the form (1.2) is obtained by applying the random DTM. Several illustrative examples are included in the last section.
2 Preliminaries
This section begins by reviewing some important concepts, definitions and results related to the stochastic calculus that will play an important role throughout the paper.
Further details can be found in [1, 6, 7]. Let (Ω, F, P) be a probability space, that is, a
triplet consisting of a space Ω of points ω, a σ-field F of measurable sets A in Ω and,
a probability measure P defined on F, [6]. Let p ≥ 1 be a real number. A real random variable U defined on (Ω, F, P) is called of order p (p-r.v.) if

$$E[|U|^p] < \infty,$$

where E[·] denotes the expectation operator. The space L_p of all the p-r.v.'s, endowed with the norm

$$\|U\|_p = \left( E[|U|^p] \right)^{1/p},$$

is a Banach space, [1, p. 9]. Let {U_n : n ≥ 0} be a sequence of p-r.v.'s. We say that it is convergent in the pth mean to the p-r.v. U ∈ L_p if

$$\lim_{n\to\infty} \|U_n - U\|_p = 0.$$
If q > p ≥ 1 and {Un : n ≥ 0} is a convergent sequence in Lq , that is, qth mean
convergent to U ∈ Lq , then {Un : n ≥ 0} lies in Lp and it is pth mean convergent to
U ∈ Lp , [1, p.13].
In this paper, we are mainly interested in the mean square (m.s.) and mean fourth (m.f.) calculus, corresponding to p = 2 and p = 4 respectively, but results related to p = 8, 16 will also be used. For p = 2, the space (L_2, ‖·‖_2) is not a Banach algebra, i.e., the inequality ‖UV‖_2 ≤ ‖U‖_2 ‖V‖_2 does not hold in general. The space L_2 is a Hilbert space and the Schwarz inequality holds:

$$\langle U, V \rangle = E[|UV|] \le \left( E[U^2] \right)^{1/2} \left( E[V^2] \right)^{1/2} = \|U\|_2\, \|V\|_2. \tag{2.1}$$
Clearly, if the p-r.v.'s U and V are independent, one gets ‖UV‖_p = ‖U‖_p ‖V‖_p. However, independence cannot be assumed in many applications. The following inequalities are particularly useful to overcome this situation:

$$\|UV\|_p \le \|U\|_{2p}\, \|V\|_{2p}, \qquad \forall\, U, V \in L_{2p}, \tag{2.2}$$

which is a direct application of the Schwarz inequality, and

$$\Bigl\| \prod_{i=1}^{m} U_i \Bigr\|_p \le \prod_{i=1}^{m} \bigl\| (U_i)^m \bigr\|_p^{1/m}, \qquad \text{with } E[(U_i)^{mp}] < \infty, \quad 1 \le i \le m, \tag{2.3}$$
which is proven in [2]. Let U_i be p-r.v.'s. The set of m-dimensional random vectors

$$\vec U = (U_1, U_2, \ldots, U_m)^T,$$

together with the norm

$$\|\vec U\|_{p,v} = \max_{1 \le i \le m} \|U_i\|_p, \tag{2.4}$$

will be denoted by (L_p^m, ‖·‖_{p,v}). The set of m × m-dimensional random matrices A = (A_{i,j}), whose entries A_{i,j} ∈ L_p, with the norm

$$\|A\|_{p,m} = \sum_{i=1}^{m} \sum_{j=1}^{m} \|A_{i,j}\|_p, \tag{2.5}$$

will be denoted by (L_p^{m×m}, ‖·‖_{p,m}). By using (2.2), it is easy to prove that

$$\|A \vec U\|_{p,v} \le \|A\|_{2p,m}\, \|\vec U\|_{2p,v}. \tag{2.6}$$
Let T be an interval of the real line. If E[|U(t)|^p] < +∞ for all t ∈ T, then {U(t) : t ∈ T} is called a stochastic process of order p (p-s.p.). If there exists a stochastic process of order p, denoted by U̇(t), U^{(1)}(t) or dU(t)/dt, such that

$$\Bigl\| \frac{U(t+h) - U(t)}{h} - \frac{dU(t)}{dt} \Bigr\|_p \to 0 \quad \text{as } h \to 0,$$

then {U(t) : t ∈ T} is said to be pth mean differentiable at t ∈ T.

Furthermore, a p-s.p. {U(t) : |t| < c} is pth mean analytic on |t| < c if it can be expanded in the pth mean convergent Taylor series

$$U(t) = \sum_{n=0}^{\infty} \frac{U^{(n)}(0)}{n!}\, t^n, \tag{2.7}$$

where U^{(n)}(0) denotes the derivative of order n of the s.p. U(t) evaluated at the point t = 0, in the pth mean sense. In connection with the pth mean analyticity, we state without proof the following result, which can be found in [3].
Proposition 2.1. Let {U(t) : |t| < c} be a pth mean analytic s.p. given by

$$U(t) = \sum_{k=0}^{\infty} U_k\, t^k, \qquad U_k = \frac{U^{(k)}(0)}{k!}, \qquad |t| < c,$$

where the derivatives are considered in the pth mean sense. Then, there exists M > 0 such that

$$\|U_k\|_p \le \frac{M}{\rho^k}, \qquad 0 < \rho < c, \quad \forall\, k \ge 0 \text{ integer}. \tag{2.8}$$
We end this section with a short presentation of the random differential transform method and some of its important operational results, which can be found in [8]. The random differential transform of an m.s. analytic s.p. U(t) is defined as

$$\hat U(k) = \frac{1}{k!} \left. \frac{d^k (U(t))}{dt^k} \right|_{t=0}, \qquad k \ge 0 \text{ integer}, \tag{2.9}$$

where Û is the transformed s.p. and d^k/dt^k denotes the m.s. derivative of order k. The inverse transform of Û is defined as

$$U(t) = \sum_{k=0}^{\infty} \hat U(k)\, t^k. \tag{2.10}$$
The next result provides some operational rules for the random differential transform of
stochastic processes that will be required later.
Theorem 2.2. Let {F(t) : |t| < c}, {G(t) : |t| < c} be m.f. analytic s.p.'s. Then the following results hold:

(i) If U(t) = F(t) ± G(t), then Û(k) = F̂(k) ± Ĝ(k).

(ii) Let A be a 4-r.v. If U(t) = A F(t), then Û(k) = A F̂(k).

(iii) Let m be a nonnegative integer. If U(t) = d^m(F(t))/dt^m, then Û(k) = (k+1)⋯(k+m) F̂(k+m).

(iv) If U(t) = F(t) G(t), then

$$\hat U(k) = \sum_{n=0}^{k} \hat F(n)\, \hat G(k-n).$$
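The operational rules of Theorem 2.2 translate directly into operations on truncated coefficient sequences, which makes them easy to experiment with numerically. The following Python sketch is our own illustration, not part of the paper, and the function names are ours; it implements rules (i)–(iv) for deterministic coefficients, while in the random setting each coefficient would be a random variable (or a sample of it).

```python
import math

def dt_add(F, G):
    # Rule (i): transform of F(t) + G(t) is F̂(k) + Ĝ(k)
    return [f + g for f, g in zip(F, G)]

def dt_scale(a, F):
    # Rule (ii): transform of A·F(t) is A·F̂(k)
    return [a * f for f in F]

def dt_deriv(F, m):
    # Rule (iii): transform of d^m F/dt^m is (k+1)...(k+m)·F̂(k+m)
    N = len(F)
    return [math.prod(range(k + 1, k + m + 1)) * F[k + m] for k in range(N - m)]

def dt_mul(F, G):
    # Rule (iv): Cauchy product, Û(k) = Σ_{n=0}^{k} F̂(n) Ĝ(k−n)
    N = min(len(F), len(G))
    return [sum(F[n] * G[k - n] for n in range(k + 1)) for k in range(N)]

# Example: U(t) = e^t has coefficients 1/k!, and e^t · e^t = e^{2t}
N = 8
E = [1.0 / math.factorial(k) for k in range(N)]
E2 = dt_mul(E, E)   # coefficients of e^{2t}, i.e. 2^k / k!
print(E2[:4])
```

Rule (iii) applied to E returns E itself (shifted series of e^t), which is a convenient sanity check.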
3 Application of the Random DTM
By applying the operational properties of the random DTM given in Theorem 2.2 to the initial value problem (1.1), one obtains

$$(k+1)\,\hat X(k+1) = \sum_{r=0}^{k} \hat P_n(r)\, \hat X(k-r) + \hat B(k), \tag{3.1}$$

where

$$\hat P_n(r) = \begin{cases} A_r, & 0 \le r \le n,\\ 0, & r > n. \end{cases} \tag{3.2}$$

Notice that we have used that the deterministic differential transform of t^i, 1 ≤ i ≤ n, is just 1 at k = i and 0 otherwise, [9]. Thus equation (3.1) can be written as

$$(k+1)\,\hat X(k+1) = \sum_{r=0}^{n} A_r\, \hat X(k-r) + \hat B(k), \qquad k = n, n+1, \ldots, \tag{3.3}$$
for which the r.v.'s X̂(0), …, X̂(n) are computed from (3.1). The initial value X̂(0) is computed directly from the initial condition X_0. Note that from equations (3.1)–(3.3) one can construct an approximation to the solution process of (1.1). In order to prove the m.s. convergence of the inverse transform of X̂(k) given in (1.2), we define

$$\vec Z(k) := (Z_1(k), Z_2(k), \ldots, Z_n(k), Z_{n+1}(k))^T = (\hat X(k-n), \hat X(k-n+1), \ldots, \hat X(k-1), \hat X(k))^T. \tag{3.4}$$

Then, recurrence (3.3) can be written as

$$\vec Z(k+1) = A(k)\, \vec Z(k) + \vec H(k), \tag{3.5}$$
where

$$A(k) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
\dfrac{A_n}{k+1} & \dfrac{A_{n-1}}{k+1} & \dfrac{A_{n-2}}{k+1} & \cdots & \dfrac{A_0}{k+1}
\end{pmatrix}_{(n+1)\times(n+1)},
\qquad
\vec H(k) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \dfrac{\hat B(k)}{k+1} \end{pmatrix}_{n+1}. \tag{3.6}$$

Iterations of (3.5) yield

$$\vec Z(k) = \Bigl( \prod_{i=n}^{k-1} A(i) \Bigr) \vec Z(n) + \sum_{r=n}^{k-1} \Bigl( \prod_{i=r+1}^{k-1} A(i) \Bigr) \vec H(r), \tag{3.7}$$

for k = n, n+1, …, where

$$\prod_{i=n_0}^{m} A(i) = \begin{cases} A(m)A(m-1)\cdots A(n_0), & \text{if } m \ge n_0,\\ 1, & \text{otherwise.} \end{cases} \tag{3.8}$$
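The construction of this section is fully algorithmic: once values (or samples) of A_0, …, A_n, X_0 and the transforms B̂(k) are available, the coefficients follow from the single recurrence X̂(k+1) = (Σ_{r=0}^{min(k,n)} A_r X̂(k−r) + B̂(k))/(k+1), which subsumes (3.1)–(3.3). The Python sketch below is our own illustration (the names are not from the paper) for one realization of the random data.

```python
import math

def dtm_coeffs(A, x0, B_hat, N):
    """Coefficients X̂(0..N) of the series solution of
    Ẋ = (Σ_r A_r t^r) X + B(t), X(0) = x0,
    via (k+1) X̂(k+1) = Σ_{r=0}^{min(k,n)} A_r X̂(k−r) + B̂(k)."""
    n = len(A) - 1
    X = [x0]
    for k in range(N):
        s = sum(A[r] * X[k - r] for r in range(min(k, n) + 1))
        X.append((s + B_hat(k)) / (k + 1))
    return X

def eval_series(X, t):
    # Truncated inverse transform X_N(t) = Σ X̂(k) t^k, cf. (4.10)
    return sum(c * t**k for k, c in enumerate(X))

# Sanity check with deterministic data: Ẋ = aX, X(0) = 1  ⇒  X̂(k) = a^k / k!
a = 0.7
X = dtm_coeffs([a], 1.0, lambda k: 0.0, N=20)
print(abs(eval_series(X, 0.5) - math.exp(a * 0.5)))  # tiny truncation error
```

In the random setting one would call `dtm_coeffs` once per sampled realization of the coefficients and of the forcing-term transforms.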
4 Mean Square Convergence
This section is addressed to prove that the inverse transform of X̂(k) given in (1.2) is an m.s. solution of problem (1.1). This result is summarized as follows.

Theorem 4.1. Consider problem (1.1) and let us assume that:

i) The polynomial coefficients A_j are r.v.'s satisfying the following condition:

$$\exists\, M, K > 0 : \quad E[|A_j^m|] \le K M^m < \infty, \qquad \forall\, m \ge 0, \quad j = 0, 1, \ldots, n.$$

ii) The initial condition X_0 is a 16-r.v.

iii) The forcing term {B(t) : |t| < c} is a 16th mean analytic stochastic process, where c > 1/(n+1).

Then there exists an m.s. solution of the form

$$X(t) = \sum_{k=0}^{\infty} \hat X(k)\, t^k, \qquad |t| < \frac{1}{n+1}, \tag{4.1}$$

where the X̂(k) are defined by (3.3).
Proof. Expression (3.7) will be used to show that the series

$$\sum_{k=0}^{\infty} \|\vec Z(k)\|_{4,v}\, |t|^k$$

is convergent for |t| < d_1, where d_1 is to be determined later. This fact will imply that the series Σ_{k=0}^∞ X̂(k) t^k is m.f. convergent for |t| < d_1, because ‖Z⃗(k)‖_{4,v} ≥ ‖X̂(k)‖_4. Note that the m.f. convergence of (4.1) is required because it was a condition for applying Theorem 2.2 to equation (1.1). Setting

$$\vec Z_1(k) = \Bigl( \prod_{i=n}^{k-1} A(i) \Bigr) \vec Z(n) \quad \text{and} \quad \vec Z_2(k) = \sum_{r=n}^{k-1} \Bigl( \prod_{i=r+1}^{k-1} A(i) \Bigr) \vec H(r),$$

one gets Z⃗(k) = Z⃗_1(k) + Z⃗_2(k). Then we will prove that the series Σ_{k=0}^∞ ‖Z⃗_i(k)‖_{4,v} |t|^k are convergent for i = 1, 2 to achieve our goal. Following the previous notation, let us define

$$\tilde A := \prod_{i=n}^{k-1} A(i) = (\tilde A_{r,s}).$$

From (3.8) it follows that

$$\tilde A_{r,s} = \sum_{s_1, s_2, \ldots, s_{k-n-1} = 1}^{n+1} A_{r,s_1}(k-1)\, A_{s_1,s_2}(k-2)\, A_{s_2,s_3}(k-3) \cdots A_{s_{k-n-1},s}(n). \tag{4.2}$$
Note that à is an n + 1 × n + 1-dimension random matrix. Now, using the norm for
(4.2) defined in (2.5) and inequality (2.3) yield
n+1
n+1
k−1
Y
X
X
Ar,s1 (k − 1)As1 ,s2 (k − 2) . . . As
≤
(n)
A(i)
,
s
k−n−1
2p
r,s=1 s1 ,s2 ,...,sk−n−1 =1
i=n
2p,m
1
1
k−n
n+1
k−n
X
k−n k−n . . . Ask−n−1 , s (n)
.
≤
(Ar,s1 (k − 1)) r,s,s1 ,s2 ,...,sk−n−1 =1
2p
2p
(4.3)
Note that any component A_{r,s}(l) of the matrix A(l) can be either 0, 1 or A_j/(l+1), with j = 0, …, n and l = n, n+1, …, k−1. Then, one obtains

$$\bigl\| (A_{r,s}(l))^{k-n} \bigr\|_{2p}^{\frac{1}{k-n}} = \begin{cases} 0, & \text{or}\\ 1, & \text{or}\\ \left( E\!\left[ \Bigl( \dfrac{A_j}{l+1} \Bigr)^{2p(k-n)} \right] \right)^{\frac{1}{2p(k-n)}}. \end{cases} \tag{4.4}$$
By i) and (4.4) it follows that

$$\left( E\!\left[ \Bigl( \frac{A_j}{l+1} \Bigr)^{2p(k-n)} \right] \right)^{\frac{1}{2p(k-n)}} \le \frac{K^{\frac{1}{2p(k-n)}}\, M}{l+1}. \tag{4.5}$$

On the other hand, by (3.3)–(3.4) and observing that the terms involved in Z⃗(n) have the form A_0^{γ_0} ⋯ A_{n−1}^{γ_{n−1}} X_0 or A_0^{γ_0} ⋯ A_{n−1}^{γ_{n−1}} B̂(k) for k = 0, …, n−1, with γ_i nonnegative integers such that 0 ≤ γ_1 + ⋯ + γ_{n−1} ≤ n, conditions i) and ii) imply that there exists C(n) such that ‖Z⃗(n)‖_{2p,v} ≤ C(n) < ∞.
Now we analyze the bounds of Z⃗_1(k) and Z⃗_2(k). To conduct the study, we distinguish two cases.

Case I: If K^{1/(2p(k−n))} M/(l+1) ≤ 1 for all l and k, then

$$\Bigl\| \prod_{i=n}^{k-1} A(i) \Bigr\|_{2p,m} \le (n+1)^{k-n+1}.$$

By using (2.6) with p = 4, this yields

$$\|\vec Z_1(k)\|_{4,v} \le \Bigl\| \prod_{i=n}^{k-1} A(i) \Bigr\|_{8,m} \|\vec Z(n)\|_{8,v} \le C(n)(n+1)^{k-n+1}.$$

Hence, in this case, for each t the series Σ_{k=0}^∞ ‖Z⃗_1(k)‖_{2p,v} |t|^k can be majorized by Σ_{k=0}^∞ α_k^1, with α_k^1 = C(n)(n+1)^{k−n+1} |t|^k, which is convergent for |t| < 1/(n+1), as can be checked by D'Alembert's test.

Also,

$$\Bigl\| \prod_{i=r+1}^{k-1} A(i) \Bigr\|_{2p,m} \le (n+1)^{k-r},$$

and by Proposition 2.1 it follows that

$$\|\vec H(r)\|_{2p,v} = \Bigl\| \frac{\hat B(r)}{r+1} \Bigr\|_{2p} \le \frac{M_1}{(r+1)\rho^r}, \tag{4.6}$$

for 0 < ρ < c and M_1 > 0. Thus,

$$\sum_{k=n+1}^{\infty} \|\vec Z_2(k)\|_{p,v}\, |t|^k \le M_1 \sum_{k=n+1}^{\infty} \Bigl( \sum_{r=n}^{k-1} \frac{1}{r+1} \Bigl( \frac{1}{(n+1)\rho} \Bigr)^{r} \Bigr) (n+1)^k\, |t|^k. \tag{4.7}$$

Using D'Alembert's test, one proves that the series on the right-hand side of (4.7) is convergent for |t| < 1/(n+1), for all n ≥ 1 and c > ρ ≥ 1/(n+1).

Case II: If K^{1/(2p(k−n))} M/(l+1) ≥ 1 for all l and k, then

$$\Bigl\| \prod_{i=n}^{k-1} A(i) \Bigr\|_{2p,m} \le n!\, K^{\frac{1}{2p}}\, \frac{M^{k-n}\,(n+1)^{k-n+1}}{k!}.$$
On the other hand, we know that ‖Z⃗(n)‖_{2p,v} ≤ C(n), and then Σ_{k=n+1}^∞ ‖Z⃗_1(k)‖_{2p,v} |t|^k is majorized by

$$\sum_{k=n+1}^{\infty} C(n)\, n!\, K^{\frac{1}{2p}}\, \frac{M^{k-n}\,(n+1)^{k-n+1}}{k!}\, |t|^k,$$

which is convergent on the whole real line.

Taking into account that k − r − 1 < k − n because r = n, n+1, …, it follows that (k−r−1)/(k−n) < 1; hence, for K > 1 we have that

$$K^{\frac{k-r-1}{2p(k-n)}} < K^{\frac{1}{2p}}. \tag{4.8}$$

By

$$\Bigl\| \prod_{i=r+1}^{k-1} A(i) \Bigr\|_{2p,m} \le \frac{(n+1)^{k-r}\,(r+1)!\, K^{\frac{k-r}{2p(k-n)}}\, M^{k-r-1}}{k!}$$

and (4.6), (4.8), it follows for K > 1 that

$$\sum_{k=n+1}^{\infty} \|\vec Z_2(k)\|_{p,v}\, |t|^k \le M_1 K^{\frac{1}{2p}} \sum_{k=n+1}^{\infty} \Bigl( \sum_{r=n}^{k-1} \frac{r!}{((n+1)\rho M)^{r}} \Bigr) \frac{(n+1)^k}{k!}\, M^{k-1}\, |t|^k. \tag{4.9}$$

It can be checked that the series on the right-hand side of (4.9) is convergent for |t| < ρ. The same remains true for K < 1. By an analogous study combining Case I and Case II, it is proved that the radii of convergence of the series Σ_{k=0}^∞ ‖Z⃗_i(k)‖_{2p,v} |t|^k, i = 1, 2, are greater than or equal to 1/(n+1). This completes the proof.

In order to obtain approximations of the mean and standard deviation functions by the DTM, we truncate the infinite sum given by (4.1):

$$X_N(t) = \sum_{k=0}^{N} \hat X(k)\, t^k. \tag{4.10}$$

Thus,

$$E[X_N(t)] = \sum_{k=0}^{N} E[\hat X(k)]\, t^k, \tag{4.11}$$

$$E[(X_N(t))^2] = \sum_{k=0}^{N} E\bigl[ (\hat X(k))^2 \bigr]\, t^{2k} + 2 \sum_{k=1}^{N} \sum_{l=0}^{k-1} E\bigl[ \hat X(k)\, \hat X(l) \bigr]\, t^{k+l}. \tag{4.12}$$
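In practice, when the coefficient moments E[X̂(k)] and E[X̂(k)X̂(l)] appearing in (4.11)–(4.12) are not available in closed form, they can be replaced by sample averages over realizations of the random data. The sketch below is our own illustration (it assumes the caller supplies sampled coefficient vectors, one per realization); it evaluates (4.11)–(4.12) with sample averages and checks the result against direct averaging of X_N(t) over the same realizations.

```python
import random, math

def moments_from_samples(samples, t):
    """Mean and standard deviation of X_N(t) = Σ_k X̂(k) t^k via (4.11)–(4.12),
    with E[X̂(k)] and E[X̂(k)X̂(l)] replaced by sample averages.
    `samples` is a list of coefficient vectors (one per realization)."""
    m = len(samples)
    N = len(samples[0]) - 1
    EX = [sum(s[k] for s in samples) / m for k in range(N + 1)]            # E[X̂(k)]
    EXX = [[sum(s[k] * s[l] for s in samples) / m for l in range(N + 1)]
           for k in range(N + 1)]                                          # E[X̂(k) X̂(l)]
    mean = sum(EX[k] * t**k for k in range(N + 1))                         # (4.11)
    second = (sum(EXX[k][k] * t**(2 * k) for k in range(N + 1))
              + 2 * sum(EXX[k][l] * t**(k + l)
                        for k in range(1, N + 1) for l in range(k)))       # (4.12)
    return mean, math.sqrt(max(second - mean**2, 0.0))

# Cross-check against direct averaging of X_N(t) over the same realizations.
random.seed(0)
samples = [[random.gauss(0, 1) / math.factorial(k) for k in range(6)] for _ in range(500)]
t = 0.3
mean, sd = moments_from_samples(samples, t)
vals = [sum(c * t**k for k, c in enumerate(s)) for s in samples]
direct_mean = sum(vals) / len(vals)
direct_sd = math.sqrt(sum(v * v for v in vals) / len(vals) - direct_mean**2)
print(abs(mean - direct_mean), abs(sd - direct_sd))  # both routes agree up to rounding
```

Both routes process the same data, so (4.11)–(4.12) with sample-average moments must agree with the empirical mean and standard deviation of the truncated series up to floating-point error.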
5 Examples and Conclusions
In this section several illustrative examples in connection with the previous development
are provided. The mean and standard deviation of the truncated solution are computed
from (4.11)–(4.12).
Example 5.1. Let us consider equation (1.1) with P_1(t) = At, where A is a standard normal r.v. (A ∼ N(0, 1)), B(t) = e^{−Bt}, where B is an exponential r.v. with parameter λ = 1 (B ∼ Exp(λ = 1)), and X_0 is a Beta r.v. with parameters α = 2 and β = 3 (X_0 ∼ Be(α = 2; β = 3)). To simplify computations, in this example we assume that A, B and X_0 are independent r.v.'s. In order that i) and iii) of Theorem 4.1 are satisfied, we truncate the r.v.'s A and B. For those r.v.'s we select L_1, L_2 > 0 sufficiently large so as to make P(B ≥ L_1) and P(|A| ≥ L_2) arbitrarily small; see the truncation method in [6]. Let Ā and B̄ be the corresponding truncated r.v.'s of A and B, respectively. It is easy to show that B̃(t) := e^{−B̄t} is 16th mean analytic on the whole real line and that Ā satisfies i). Clearly, X_0 is a 16-r.v. Hence, Theorem 4.1 holds and X(t) given by (4.1) is an m.s. solution of (1.1). Table 5.1 contains the numerical results for the mean E[X_N(t)] and the standard deviation σ[X_N(t)] using (4.11)–(4.12) with N = 20, as well as the mean µ_m^{MC}(X(t)) and the standard deviation σ_m^{MC}(X(t)) obtained by Monte Carlo sampling with m = 100000 simulations. We chose L_1 = 100 and L_2 = 8, which give P(B ≥ L_1) = 3.72008 × 10^{−44} and P(|A| ≥ L_2) = 1.22125 × 10^{−15}. Notice that the approximations provided by the random DTM are in agreement with the Monte Carlo simulations.
t      µ_m^{MC}(X(t))   E[X_N(t)]   σ_m^{MC}(X(t))   σ[X_N(t)]
       m = 100000                   m = 100000
0.00   0.249618         0.250000    0.193389         0.193649
0.10   0.344950         0.345314    0.193448         0.193711
0.20   0.432082         0.432392    0.194243         0.194511
0.30   0.512552         0.512766    0.197344         0.197620
0.40   0.587812         0.587884    0.205102         0.205386
0.50   0.659367         0.659248    0.220580         0.220914

Table 5.1: Comparison of the expectation and standard deviation functions of Example 5.1 by using expressions (4.11)–(4.12) with N = 20 and the Monte Carlo method with m = 100000 simulations, at some selected points of the interval 0 ≤ t ≤ 1/2.
Example 5.2. We now consider P_1(t) = At where A ∼ Be(α_1 = 1, β_1 = 2), B(t) = B where B ∼ Gamma(α_2 = 2, β_2 = 3), and X_0 ∼ Unif([10, 14]). Condition i) of Theorem 4.1 is satisfied because A has codomain [0, 1]. Clearly, B is 16th mean analytic and X_0 is a 16-r.v.; therefore X(t) given by (4.1) is an m.s. solution of equation (1.1).
Using the same notation as in Example 5.1 for the mean and standard deviation, we show the numerical results in Table 5.2. Note that the Monte Carlo simulations are in agreement with the random DTM results.
t      µ_m^{MC}(X(t))   E[X_N(t)]   σ_m^{MC}(X(t))   σ[X_N(t)]
       m = 150000                   m = 150000
0.00   11.99464         12.00000    1.15609          1.15469
0.10   12.61696         12.62069    1.20764          1.20759
0.20   13.28357         13.28575    1.35536          1.35620
0.30   13.99951         14.00020    1.57876          1.58004
0.40   14.770583        14.76986    1.86085          1.86218
0.50   15.603558        15.60148    2.19295          2.19409

Table 5.2: Comparison of the expectation and standard deviation functions of Example 5.2 by using expressions (4.11)–(4.12) with N = 10 and the Monte Carlo method with m = 150000 simulations, at some selected points of the interval 0 ≤ t ≤ 1/2.
Example 5.3. The differential equation Ẋ(t) = (γ cos t)X(t) is a mathematical model for a population X(t) that undergoes yearly seasonal fluctuations [10]. In the deterministic scenario, γ is a positive parameter that represents the intensity of the fluctuations. In practice, however, it depends heavily on uncertain factors such as weather, genetics, prey and predator species, and other characteristics of the surrounding medium. This motivates treating the parameter γ as a r.v. rather than as a deterministic constant, leading to a random differential equation. To apply Theorem 4.1, we use the approximation cos t ≈ 1 − t²/2, which leads to the equation Ẋ(t) = γ(1 − t²/2)X(t). Thus P_2(t) = γ − (γ/2)t², and so A_0 = γ, A_1 = 0, A_2 = −γ/2. Assuming that γ ∼ Be(α = 2, β = 5) and X_0 ∼ Unif([110, 140]), in Table 5.3 we collect the numerical approximations for the expectation and the standard deviation of the solution s.p. X(t) using both Monte Carlo with 10^6 simulations and the random DTM with truncation order N = 10. Again, we observe that the results provided by both approaches agree.
We would like to point out that the main contribution of this paper is the rigorous construction of a mean square analytic solution of an important class of time-dependent random differential equations by the random differential transform method. The approach allows one to provide reliable approximations of the first and second moments of the solution stochastic process.
t      µ_m^{MC}(X(t))   E[X_N(t)]   σ_m^{MC}(X(t))   σ[X_N(t)]
       m = 100000                   m = 100000
0.00   125.007          125         8.6580           8.6602
0.10   128.645          128.6333    9.1498           9.1482
0.20   132.385          132.368     10.1174          10.1079
0.30   136.189          136.165     11.4967          11.4762
0.40   140.012          139.981     13.1918          13.1584
0.50   143.804          143.767     15.1047          15.0573

Table 5.3: Comparison of the expectation and standard deviation functions of Example 5.3 by using expressions (4.11)–(4.12) with N = 10 and the Monte Carlo method with m = 100000 simulations, at some selected points of the interval 0 ≤ t ≤ 1/2.
Acknowledgements

This work has been partially supported by the Ministerio de Economía y Competitividad grant MTM2009–08587, the Universitat Politècnica de València grant PAID06–11–2070, PIEIC–UNACH, and the Mexican Promep.
References
[1] L. Arnold. Stochastic differential equations: theory and applications. Wiley Interscience [John Wiley & Sons], New York–London–Sydney, 1974. Translated from
the German.
[2] G. Calbo, J.-C. Cortés, L. Jódar, and L. Villafuerte. Solving the random Legendre differential equation: mean square power series solution and its statistical
functions. Comput. Math. Appl., 61(9):2782–2792, 2011.
[3] J.-C. Cortés, L. Jódar, R. Company, and L. Villafuerte. Solving Riccati
time-dependent models with random quadratic coefficients. Appl. Math. Lett.,
24(12):2193–2196, 2011.
[4] J.-C. Cortés, L. Jódar, M.-D. Rosello, and L. Villafuerte. Solving initial and
two-point boundary value linear random differential equations: a mean square approach. Appl. Math. Comput., 219(4):2204–2211, 2012.
[5] J.-C. Cortés, L. Jódar, and L. Villafuerte. Random linear-quadratic mathematical
models: computing explicit solutions and applications. Math. Comput. Simulation,
79(7):2076–2090, 2009.
[6] M. Loève. Probability theory I, volume 45 of Graduate Texts in Mathematics.
Springer–Verlag, New York–Heidelberg, 4th edition, 1977.
[7] T. T. Soong. Random differential equations in science and engineering, volume
103 of Mathematics in Science and Engineering. Academic Press [Harcourt Brace
Jovanovich, Publishers], New York–London, 1973.
[8] L. Villafuerte and B. M. Chen-Charpentier. A random differential transform
method: theory and applications. Appl. Math. Lett., 25(10):1490–1494, 2012.
[9] J. K. Zhou. Differential Transform Method and Its Applications for Electrical Circuits. Huazhong University Press, Wuhan, China, 1986. (In Chinese).
[10] D. G. Zill. A First Course in Differential Equations with Modeling Applications.
Brooks/Cole, Cengage Learning, Boston, 10th edition, 2012.