11.2.9 Central Limit Theorem
If X_1, X_2, ..., X_n are n mutually independent random variables with finite mean and variance (they may follow different distributions), the sum

$$S_n = \sum_{i=1}^{n} X_i \qquad (11.60)$$

tends to a normal variable if no single variable contributes significantly to the sum as n tends to infinity. Because of this theorem, we can approximate most physical phenomena as normal random variables. Physically, S_n may represent, for example, the tensile strength of a fiber-reinforced material, in which case the total tensile strength is given by the sum of the tensile strengths of the individual fibers. In this case the tensile strength of the material may be represented as a normally distributed random variable.
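As a quick illustration of the theorem, the following Python sketch (not part of the original text; the fiber-strength distributions are purely assumed) sums the strengths of fibers drawn from two different distributions and checks that the total strength behaves like a normal variable:

```python
import numpy as np

# A minimal sketch (not from the text): the total tensile strength of a
# fiber-reinforced material modeled as the sum of individual fiber strengths.
# The two fiber-strength distributions below are illustrative assumptions.
rng = np.random.default_rng(0)
n_fibers, n_samples = 200, 100_000

# The CLT needs independence and finite mean/variance, not identical
# distributions, so half the fibers are uniform and half are gamma.
uniform_part = rng.uniform(2.0, 4.0, size=(n_samples, n_fibers // 2))
gamma_part = rng.gamma(shape=3.0, scale=1.0, size=(n_samples, n_fibers // 2))
total_strength = np.hstack([uniform_part, gamma_part]).sum(axis=1)

# The sample skewness of S_n should be close to 0, as for a normal variable.
centered = total_strength - total_strength.mean()
skewness = (centered**3).mean() / total_strength.std()**3
print(f"mean = {total_strength.mean():.1f}, "
      f"std = {total_strength.std():.1f}, skewness = {skewness:.3f}")
```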
11.3 STOCHASTIC LINEAR PROGRAMMING
A stochastic linear programming problem can be stated as follows:

$$\text{Minimize } f(X) = C^{T}X = \sum_{j=1}^{n} c_j x_j \qquad (11.61)$$

subject to

$$A_i^{T}X = \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \ldots, m \qquad (11.62)$$

$$x_j \ge 0, \quad j = 1, 2, \ldots, n \qquad (11.63)$$
where c_j, a_{ij}, and b_i are random variables (the decision variables x_j are assumed to be deterministic for simplicity) with known probability distributions. Several methods are available for solving the problem stated in Eqs. (11.61) to (11.63). In this section we consider a method known as the chance-constrained programming technique. As the name indicates, the chance-constrained programming technique can be used to solve problems involving chance constraints, that is, constraints having a finite probability of being violated. This technique was originally developed by Charnes and Cooper [11.5]. In this method the stochastic programming problem is stated as follows:
$$\text{Minimize } f(X) = \sum_{j=1}^{n} c_j x_j \qquad (11.64)$$

subject to

$$P\left[\sum_{j=1}^{n} a_{ij} x_j \le b_i\right] \ge p_i, \quad i = 1, 2, \ldots, m \qquad (11.65)$$

$$x_j \ge 0, \quad j = 1, 2, \ldots, n \qquad (11.66)$$
where c_j, a_{ij}, and b_i are random variables and p_i are specified probabilities. Notice that Eqs. (11.65) indicate that the ith constraint,

$$\sum_{j=1}^{n} a_{ij} x_j \le b_i$$

has to be satisfied with a probability of at least p_i, where 0 \le p_i \le 1. For simplicity, we assume that the design variables x_j are deterministic and c_j, a_{ij}, and b_i are random variables. We shall further assume that all the random variables are normally distributed with known means and standard deviations.
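To make the meaning of a chance constraint concrete, the following Python sketch (with assumed data, not from the text) estimates by Monte Carlo the probability that one constraint of the form in Eq. (11.65) holds for a fixed design X:

```python
import numpy as np

# A minimal sketch with assumed data (not from the text): a Monte Carlo check
# of one chance constraint P[sum_j a_ij x_j <= b_i] >= p_i for a fixed design X.
rng = np.random.default_rng(1)
x = np.array([2.0, 3.0])                      # candidate deterministic design
a_mean, a_std = np.array([1.0, 2.0]), np.array([0.1, 0.2])  # assumed a_ij stats
b_mean, b_std = 10.0, 0.5                     # assumed b_i statistics
p_i = 0.95                                    # assumed required probability

n_samples = 200_000
a = rng.normal(a_mean, a_std, size=(n_samples, 2))   # samples of (a_i1, a_i2)
b = rng.normal(b_mean, b_std, size=n_samples)        # samples of b_i
prob = np.mean(a @ x <= b)                           # estimate of P[A_i^T X <= b_i]
print(f"estimated probability = {prob:.4f}, satisfied: {bool(prob >= p_i)}")
```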
Since c_j are normally distributed random variables, the objective function f(X) will also be a normally distributed random variable. The mean and variance of f are given by

$$\bar{f} = \sum_{j=1}^{n} \bar{c}_j x_j \qquad (11.67)$$

$$\text{Var}(f) = X^{T}VX \qquad (11.68)$$
where \bar{c}_j is the mean value of c_j and the matrix V is the covariance matrix of the c_j defined as

$$V = \begin{bmatrix}
\text{Var}(c_1) & \text{Cov}(c_1, c_2) & \cdots & \text{Cov}(c_1, c_n) \\
\text{Cov}(c_2, c_1) & \text{Var}(c_2) & \cdots & \text{Cov}(c_2, c_n) \\
\vdots & \vdots & & \vdots \\
\text{Cov}(c_n, c_1) & \text{Cov}(c_n, c_2) & \cdots & \text{Var}(c_n)
\end{bmatrix} \qquad (11.69)$$

with Var(c_j) and Cov(c_i, c_j) denoting the variance of c_j and the covariance between c_i and c_j, respectively. A new deterministic objective function for minimization can be formulated as

$$F(X) = k_1 \bar{f} + k_2 \sqrt{\text{Var}(f)} \qquad (11.70)$$
where k_1 and k_2 are nonnegative constants whose values indicate the relative importance of \bar{f} and the standard deviation of f for minimization. Thus k_2 = 0 indicates that the expected value of f is to be minimized without caring for the standard deviation of f. On the other hand, if k_1 = 0, it indicates that we are interested in minimizing the variability of f about its mean value without bothering about what happens to the mean value of f. Similarly, if k_1 = k_2 = 1, it indicates that we are giving equal importance to the minimization of the mean as well as the standard deviation of f. Notice that the new objective function stated in Eq. (11.70) is a nonlinear function in X in view of the expression for the variance of f.
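The sketch below (with assumed means and covariances, not taken from the text) evaluates the composite objective of Eq. (11.70) for a two-variable problem, using Eqs. (11.67) and (11.68) for the mean and variance of f:

```python
import numpy as np

# A minimal sketch with assumed data (not from the text): evaluating the
# deterministic objective of Eq. (11.70) for normally distributed cost
# coefficients c_j with assumed means and covariance matrix V.
c_mean = np.array([4.0, 6.0])            # assumed mean values of c_j
V = np.array([[0.16, 0.02],              # assumed covariance matrix of the c_j
              [0.02, 0.25]])

def F(x, k1=1.0, k2=1.0):
    f_bar = c_mean @ x                        # Eq. (11.67): mean of f
    var_f = x @ V @ x                         # Eq. (11.68): variance of f
    return k1 * f_bar + k2 * np.sqrt(var_f)   # Eq. (11.70)

print(F(np.array([2.0, 3.0])))           # k1 = k2 = 1: equal weight on mean and spread
```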
The constraints of Eq. (11.65) can be expressed as

$$P[h_i \le 0] \ge p_i, \quad i = 1, 2, \ldots, m \qquad (11.71)$$
where h_i is a new random variable defined as

$$h_i = \sum_{j=1}^{n} a_{ij} x_j - b_i = \sum_{k=1}^{n+1} q_{ik} y_k \qquad (11.72)$$

where

$$q_{ik} = a_{ik}, \quad k = 1, 2, \ldots, n; \qquad q_{i,n+1} = b_i$$
$$y_k = x_k, \quad k = 1, 2, \ldots, n; \qquad y_{n+1} = -1$$

Notice that the constant y_{n+1} is introduced for convenience. Since h_i is given by a linear combination of the normally distributed random variables q_{ik}, it will also follow a normal distribution. The mean and the variance of h_i are given by

$$\bar{h}_i = \sum_{k=1}^{n+1} \bar{q}_{ik} y_k = \sum_{j=1}^{n} \bar{a}_{ij} x_j - \bar{b}_i \qquad (11.73)$$

$$\text{Var}(h_i) = Y^{T} V_i Y \qquad (11.74)$$
where

$$Y = \begin{Bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n+1} \end{Bmatrix}$$

$$V_i = \begin{bmatrix}
\text{Var}(q_{i1}) & \text{Cov}(q_{i1}, q_{i2}) & \cdots & \text{Cov}(q_{i1}, q_{i,n+1}) \\
\text{Cov}(q_{i2}, q_{i1}) & \text{Var}(q_{i2}) & \cdots & \text{Cov}(q_{i2}, q_{i,n+1}) \\
\vdots & \vdots & & \vdots \\
\text{Cov}(q_{i,n+1}, q_{i1}) & \text{Cov}(q_{i,n+1}, q_{i2}) & \cdots & \text{Var}(q_{i,n+1})
\end{bmatrix}$$
This can be written more explicitly as

$$\text{Var}(h_i) = \sum_{k=1}^{n+1} \left[ y_k^2 \,\text{Var}(q_{ik}) + \sum_{l=k+1}^{n+1} 2 y_k y_l \,\text{Cov}(q_{ik}, q_{il}) \right] \qquad (11.75)$$

$$= \sum_{k=1}^{n} \left[ y_k^2 \,\text{Var}(q_{ik}) + \sum_{l=k+1}^{n} 2 y_k y_l \,\text{Cov}(q_{ik}, q_{il}) \right] + y_{n+1}^2 \,\text{Var}(q_{i,n+1}) + \sum_{k=1}^{n} 2 y_k y_{n+1} \,\text{Cov}(q_{ik}, q_{i,n+1}) \qquad (11.76)$$
$$= \sum_{k=1}^{n} \left[ x_k^2 \,\text{Var}(a_{ik}) + \sum_{l=k+1}^{n} 2 x_k x_l \,\text{Cov}(a_{ik}, a_{il}) \right] + \text{Var}(b_i) - 2 \sum_{k=1}^{n} x_k \,\text{Cov}(a_{ik}, b_i) \qquad (11.77)$$
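The sketch below (with assumed means and variances, not from the text) computes the mean and variance of h_i for one constraint via Eqs. (11.73) and (11.77), taking all covariances as zero for simplicity:

```python
import numpy as np

# A minimal sketch with assumed data (not from the text): the mean and variance
# of h_i per Eqs. (11.73) and (11.77), taking all covariances among the a_ij
# and b_i as zero for simplicity.
a_mean = np.array([10.0, 5.0])       # assumed mean values of a_ij for one constraint i
a_var = np.array([0.36, 0.09])       # assumed Var(a_ij)
b_mean, b_var = 2500.0, 625.0        # assumed mean and variance of b_i

def h_stats(x):
    h_bar = a_mean @ x - b_mean                  # Eq. (11.73)
    var_h = a_var @ x**2 + b_var                 # Eq. (11.77) with zero covariances
    return h_bar, var_h

print(h_stats(np.array([50.0, 100.0])))          # (-1500.0, 2425.0)
```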
Thus the constraints in Eqs. (11.71) can be restated as

$$P\left[\frac{h_i - \bar{h}_i}{\sqrt{\text{Var}(h_i)}} \le \frac{-\bar{h}_i}{\sqrt{\text{Var}(h_i)}}\right] \ge p_i, \quad i = 1, 2, \ldots, m \qquad (11.78)$$

where (h_i - \bar{h}_i)/\sqrt{\text{Var}(h_i)} represents a standard normal variable with a mean value of zero and a variance of 1.
Thus if s_i denotes the value of the standard normal variable at which

$$\Phi(s_i) = p_i \qquad (11.79)$$

the constraints of Eq. (11.78) can be stated as

$$\Phi\left[\frac{-\bar{h}_i}{\sqrt{\text{Var}(h_i)}}\right] \ge \Phi(s_i), \quad i = 1, 2, \ldots, m \qquad (11.80)$$

These inequalities will be satisfied only if the following deterministic nonlinear inequalities are satisfied:

$$\frac{-\bar{h}_i}{\sqrt{\text{Var}(h_i)}} \ge s_i, \quad i = 1, 2, \ldots, m$$

or

$$\bar{h}_i + s_i \sqrt{\text{Var}(h_i)} \le 0, \quad i = 1, 2, \ldots, m \qquad (11.81)$$
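In practice, s_i is obtained from the inverse of the standard normal distribution function; a minimal sketch for the 0.99 probability level used in Example 11.5 below:

```python
from scipy.stats import norm

# A minimal sketch: s_i is the standard normal value satisfying Phi(s_i) = p_i
# in Eq. (11.79). For a required probability level of 0.99:
s_i = norm.ppf(0.99)
print(s_i)   # approximately 2.326
```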
Thus the stochastic linear programming problem of Eqs. (11.64) to (11.66) can be stated as an equivalent deterministic nonlinear programming problem as

$$\text{Minimize } F(X) = k_1 \sum_{j=1}^{n} \bar{c}_j x_j + k_2 \sqrt{X^{T}VX}, \quad k_1 \ge 0, \; k_2 \ge 0$$

subject to

$$\bar{h}_i + s_i \sqrt{\text{Var}(h_i)} \le 0, \quad i = 1, 2, \ldots, m$$
$$x_j \ge 0, \quad j = 1, 2, \ldots, n \qquad (11.82)$$
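The following Python sketch shows one way such a deterministic equivalent could be solved with a general nonlinear programming routine. All the numbers, the single constraint, and the zero-covariance simplification are illustrative assumptions, not data from the text:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# A minimal sketch of the deterministic equivalent (11.82) with assumed data:
# two design variables, one chance constraint, all covariances taken as zero.
# Every number here is an illustrative assumption, not taken from the text.
c_mean = np.array([-40.0, -100.0])   # minimize negative profit = maximize profit
V = np.diag([4.0, 25.0])             # assumed covariance matrix of the c_j
a_mean = np.array([10.0, 5.0])       # assumed mean values of a_1j
a_var = np.array([0.36, 0.09])       # assumed Var(a_1j)
b_mean, b_var = 2500.0, 625.0        # assumed mean and variance of b_1
k1, k2 = 1.0, 0.0                    # minimize the expected value only
s1 = norm.ppf(0.99)                  # constraint must hold with probability 0.99

def objective(x):
    return k1 * (c_mean @ x) + k2 * np.sqrt(x @ V @ x)

def chance_constraint(x):
    # scipy expects g(x) >= 0, so return -(h_bar_1 + s_1 * sqrt(Var(h_1)))
    h_bar = a_mean @ x - b_mean
    var_h = a_var @ x**2 + b_var
    return -(h_bar + s1 * np.sqrt(var_h))

res = minimize(objective, x0=np.array([10.0, 10.0]),
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq", "fun": chance_constraint}])
print(res.x, -res.fun)               # optimal parts per week and expected profit
```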
Example 11.5 A manufacturing firm produces two machine parts using lathes, milling machines, and grinding machines. If the machining times required, the maximum times available, and the unit profits are all assumed to be normally distributed random variables with the following data, find the number of parts to be manufactured per week to maximize the profit. The constraints have to be satisfied with a probability of at least 0.99.