Skibinsky, Morris (1954). "Some Properties of a Bayes Two-Stage Test for the Mean." (Air Research and Development Command)

SOME PROPERTIES OF A BAYES TWO-STAGE TEST
FOR THE MEAN¹

by
Morris Skibinsky
Institute of Statistics
University of North Carolina

Institute of Statistics
Mimeograph Series No. 107
June, 1954

1. This research was supported by the United States Air Force, through the Office of Scientific Research of the Air Research and Development Command.
"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"

He took his vorpal sword in hand;
Long time the manxome foe he sought—
. . . . . . . . . . . . . . . . . . . .
One, two! One, two! And through and through
The vorpal blade went snicker-snack!
He left it dead, and with its head
He went galumphing back.
. . . . . . . . . . . . . . . . . . . .
'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

Jabberwocky
(Lewis Carroll)
ACKNOWLEDGMENT
This research was suggested by Professor Wassily Hoeffding,
to whom I am indebted for his careful and penetrating criticism,
and for his kind encouragement.
The financial assistance of the United States Air Force
is acknowledged, and sincerely appreciated.
Last, but certainly not least, I thank my wife, Phyllis,
without whose patience and understanding this work could not
have been completed.
Morris Skibinsky
TABLE OF CONTENTS

                                                                    Page
ACKNOWLEDGMENT                                                       iii
INTRODUCTION                                                           v

Chapter
I.   GENERAL PROPERTIES OF THE SECOND SAMPLE SIZE FUNCTION
     IN THE NORMAL CASE
     1. Nature of the Equation Defining the Second Sample
        Size Function                                                  1
     2. An Important Identity                                         23
II.  ASYMPTOTIC PROPERTIES OF THE BAYES TWO-STAGE TEST
     3. Asymptotic Expression for the Second Sample Size Function     33
     4. Expansion of Second Sample Size Function                      50
     5. Expected Value of Second Sample Size                          56
     6. Error Probabilities
     7. Comparison with One-Stage Test                                81
     8. A Trivial Asymptotic Solution
III. NON-ASYMPTOTIC CONSIDERATION OF BAYES TWO-STAGE TEST
     9. Further Properties of Second Sample Size Function             90
     10. Some Exploratory Computations in the Symmetric Case         100
BIBLIOGRAPHY                                                         123
INTRODUCTION
A fairly general statement of Wald's decision problem [5]¹ for two stages may be expressed as follows. Let X be a random variable with frequency function, f_θ(x), where θ may be a real or vector valued parameter in some subset, Ω, of the real line or of a finite dimensional euclidean space. Suppose we are given a sequence of independent observations, x_1, x_2, ..., on X. On the basis of these observations we are to select one of a finite number of possible alternative courses of action, A_0, A_1, ..., A_k, which comprise a set A of possible alternatives, by using the following decision rule.
We are given numbers, q_m, m = 1, 2, ...; functions, p_v(x_m), defined for all points x_m in m-space, and for m = 1, 2, ..., v = 0, 1, 2, ...; and functions, δ_{m+v}(A_i, x_{m+v}), defined for all A_i ∈ A, all points x_{m+v} in (m+v)-space, and for m = 1, 2, ..., v = 0, 1, 2, ...; such that always

(0.1)  q_m ≥ 0, Σ_m q_m = 1;   p_v(x_m) ≥ 0, Σ_v p_v(x_m) = 1;   δ_{m+v}(A_i, x_{m+v}) ≥ 0, Σ_i δ_{m+v}(A_i, x_{m+v}) = 1,

and

Rule.
1. Take m observations, with probability q_m.
2. If the observed sample is x_m, take a second sample of v observations, with probability p_v(x_m).
3. If the total observed sample is x_{m+v}, accept alternative A_i with probability δ_{m+v}(A_i, x_{m+v}).

1. Numbers in square brackets refer to bibliography.
The problem is to find sequences q, p, and δ, subject to the indicated restrictions, which are optimum in some sense. In particular, suppose we are given a loss function, W(θ, A_i), defined for all θ ∈ Ω and all A_i ∈ A, non-negative and bounded, representing the loss incurred by accepting alternative A_i when θ is the true parameter; a c.d.f., ξ(θ), defined over Ω; and suppose the cost per observation to be a constant, c. We may then seek those sequences, q, p, δ, for which the average expected loss is minimum, i.e. a Bayes solution.

Using the above rule, the expected number of observations required, given θ as the true parameter, is

(0.3)  Σ_{m=1}^∞ q_m ( m + Σ_{v=0}^∞ v E_θ[ p_v(x_m) ] ),
where E_θ denotes the expectation of a function of the observations on X, when θ is the true parameter value. The probability, given θ, that the rule will accept A_i is

(0.4)  Σ_{m=1}^∞ q_m Σ_{v=0}^∞ E_θ[ p_v(x_m) δ_{m+v}(A_i, x_{m+v}) ].

Hence, the risk or expected loss incurred by use of the rule, given θ, is
(0.5)  Σ_{m=1}^∞ q_m { cm + Σ_{v=0}^∞ E_θ[ p_v(x_m) ( cv + Σ_i W(θ, A_i) δ_{m+v}(A_i, x_{m+v}) ) ] },

and the average risk over Ω is, after making the permissible indicated interchange of integration and summation operations,

(0.6)  Σ_{m=1}^∞ q_m { cm + Σ_{v=0}^∞ ∫_{X_{m+v}} p_v(x_m) [ cv + Σ_i δ_{m+v}(A_i, x_{m+v}) ∫_Ω W(θ, A_i) f_θ(x_{m+v}) dξ(θ) ] dx_{m+v} },
viii
where Xm+-v stands for the (m+-v)-dimens1onal observation space, and
where by
fg(~v)'
we denote the joint frequency function of the
first m+v observation on X.
The existance of a sequence, 8, which
for any fixed sequences, q, p, and fixed point, !m+v' in (m+-v)space, will minimize (0.6), is immediately apparent.
Let

(0.7)  a_j(x_{m+v}) = ∫_Ω W(θ, A_j) f_θ(x_{m+v}) dξ(θ),  j = 1, 2, ...,

and suppose min(a_1, a_2, ...) is unique; then such a sequence is

(0.8)  δ_{m+v}(A_j, x_{m+v}) = 1, if a_j(x_{m+v}) = min_i a_i(x_{m+v});  = 0, otherwise;  j = 1, 2, ....

Modifications of (0.8) for which the restriction of a unique minimum may be removed are easily made.
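For a finite parameter set Ω the minimization in (0.7), (0.8) can be carried out directly. The following sketch is illustrative only (the function and variable names and the numeric values are not from the text); it computes each a_j as a finite sum and accepts the alternative whose posterior expected loss is smallest.

```python
import math

def bayes_terminal_decision(sample, thetas, prior, losses, density):
    """Accept the alternative A_j minimizing a_j = sum_i prior_i * W(theta_i, A_j) * f_{theta_i}(x),
    cf. (0.7)-(0.8).  losses[i][j] = W(theta_i, A_j); density(theta, sample) = joint frequency."""
    k = len(losses[0])
    a = [sum(prior[i] * losses[i][j] * density(thetas[i], sample)
             for i in range(len(thetas)))
         for j in range(k)]
    return min(range(k), key=lambda j: a[j])

def normal_density(theta, sample):
    # joint normal frequency function with unit variance (cf. (1.1) below)
    n = len(sample)
    q = sum((x - theta) ** 2 for x in sample)
    return (2 * math.pi) ** (-n / 2) * math.exp(-q / 2)

# two-point illustration: zero loss for a correct decision, unit loss for an error
j = bayes_terminal_decision([0.9, 1.1], thetas=[0.0, 1.0],
                            prior=[0.5, 0.5],
                            losses=[[0.0, 1.0], [1.0, 0.0]],
                            density=normal_density)
```

Here the data favor θ = 1, so the rule accepts A_1 (j = 1).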
To complete our Bayes solution, we need sequences, q and p, which will minimize (0.6) when the sequence δ is as defined by (0.8) or a suitable modification. Let

(0.9)  R_v(A_i, x_m) = { (x_{m+1}, ..., x_{m+v}) : δ_{m+v}(A_i, x_{m+v}) = 1 };

then the average risk may be written

(0.10)  Σ_{m=1}^∞ q_m { cm + ∫_{X_m} Σ_{v=0}^∞ p_v(x_m) μ_v(x_m) dx_m },

where, for m = 1, 2, ...,

(0.11)  μ_v(x_m) = ∫_Ω [ cv + Σ_i W(θ, A_i) P_θ( R_v(A_i, x_m) ) ] f_θ(x_m) dξ(θ),  v = 1, 2, ...;
        μ_0(x_m) = ∫_Ω [ Σ_i W(θ, A_i) δ_m(A_i, x_m) ] f_θ(x_m) dξ(θ);

and P_θ(R_v(A_i, x_m)) represents the conditional probability of accepting A_i, given that the first sample is x_m and that v observations are taken in the second.
Now

(0.12)  cv ∫_Ω f_θ(x_m) dξ(θ) + ∫_Ω min_i W(θ, A_i) f_θ(x_m) dξ(θ) ≤ μ_v(x_m) ≤ cv ∫_Ω f_θ(x_m) dξ(θ) + ∫_Ω max_i W(θ, A_i) f_θ(x_m) dξ(θ),

so that μ_v(x_m) is non-negative and has at least one absolute minimum with respect to v, for

(0.13)  v ≤ [ ∫_Ω max_{A_i, A_j ∈ A} { W(θ, A_i) − W(θ, A_j) } f_θ(x_m) dξ(θ) ] / [ c ∫_Ω f_θ(x_m) dξ(θ) ].

It cannot have an absolute minimum w.r.t. v for v greater than this number. For each x_m, let v(x_m) be a value of v for which μ_v(x_m) is absolutely minimum. One minimizing sequence, p, is then seen to be

(0.14)  p_v(x_m) = 1, if v = v(x_m);  = 0, otherwise;  v = 0, 1, 2, ...; m = 1, 2, ....
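When Ω is finite, μ_v(x_m) in (0.11) is a finite sum, and the minimizing value v(x_m) of (0.14) can be found by scanning the integers allowed by the bound (0.13). A minimal sketch (the risk function below is an arbitrary illustrative stand-in, not derived from the text):

```python
def second_sample_size(mu, c, w_spread, f_bar):
    """Return the v minimizing mu(v), scanning v = 0, 1, ... up to the bound (0.13).

    mu(v)    -- evaluates mu_v(x_m) for the observed first sample
    c        -- cost per observation
    w_spread -- integral of max_{i,j} (W(theta,A_i) - W(theta,A_j)) f_theta(x_m) dxi(theta)
    f_bar    -- integral of f_theta(x_m) dxi(theta)
    """
    v_max = int(w_spread / (c * f_bar))   # no absolute minimum can lie beyond this
    return min(range(v_max + 1), key=mu)

# toy stand-in risk: sampling cost grows linearly, error term shrinks with v
v = second_sample_size(lambda v: 0.5 * v + 9.0 / (v + 1.0),
                       c=0.5, w_spread=10.0, f_bar=1.0)
```

For this toy risk the scan returns v = 3, where 0.5·v + 9/(v+1) is smallest.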
If we use this sequence, the average risk may, by (0.10), be written

(0.15)  Σ_{m=1}^∞ q_m ρ_m,

where

(0.16)  ρ_m = cm + ∫_{X_m} μ_{v(x_m)}(x_m) dx_m,  m = 1, 2, ....

Suppose there exists a positive integer m = m*, say, such that

(0.17)  ρ_{m*} = min_m ρ_m;

then a sequence, q, for which the average risk is minimum is

(0.18)  q_m = 1, if m = m*;  = 0, otherwise;  m = 1, 2, ...,

and this in a formal sense would complete the Bayes solution.
Using this solution in our decision rule gives us, by (0.3), the expected number of observations required, given θ,

(0.19)  m* + E_θ[ v(x_{m*}) ].

By (0.4), the probability, given θ, that the rule will accept A_i is

(0.20)  E_θ[ P_θ( R_{v(x_{m*})}(A_i, x_{m*}) ) ].

The minimum average risk over Ω among all such rules is

(0.21)  ρ_{m*}.
Consider now that the size of the first sample, m, is given, and suppose the set A contains only the two alternatives, A_0, A_1. The above rule may then be simplified as follows. We are given functions, p_v(x_m), φ_{m+v}(x_{m+v}), defined for v = 0, 1, ... and, respectively, for all points x_m in m-space and all points x_{m+v} in (m+v)-space, such that always

(0.22)  p_v(x_m) ≥ 0, Σ_v p_v(x_m) = 1;   0 ≤ φ_{m+v}(x_{m+v}) ≤ 1.

1. Take m observations.
2. If the observed sample is x_m, take a second sample of v observations, with probability p_v(x_m).
3. If the total observed sample is x_{m+v}, accept A_1 with probability φ_{m+v}(x_{m+v}). Accept A_0 with one minus this probability.

The sequence, φ, is obviously related to the sequence, δ, of the more general rule.
Again, suppose we are given the non-negative loss functions, W_i(θ) = W(θ, A_i), defined for i = 0, 1, and for all θ ∈ Ω, representing the loss incurred by accepting A_i when θ is the true parameter; a c.d.f., ξ(θ), defined over Ω; and suppose the cost per observation to be a constant, c. The average risk, over Ω, involved in the use of this rule is
(0.23)  cm + Σ_{v=0}^∞ ∫_{X_{m+v}} p_v(x_m) ∫_Ω [ cv + W_0(θ)( 1 − φ_{m+v}(x_{m+v}) ) + W_1(θ) φ_{m+v}(x_{m+v}) ] f_θ(x_{m+v}) dξ(θ) dx_{m+v}.

A sequence, φ, which minimizes this, for any fixed sequence, p, is clearly seen to be

(0.24)  φ_{m+v}(x_{m+v}) = 1, if ∫_Ω W_1(θ) f_θ(x_{m+v}) dξ(θ) < ∫_Ω W_0(θ) f_θ(x_{m+v}) dξ(θ);  = 0, otherwise.
Let

(0.25)  R_v(x_m) = { (x_{m+1}, ..., x_{m+v}) : φ_{m+v}(x_{m+v}) = 1 },

where φ is as defined by (0.24), and the relation of this set to (0.9) is obvious; then the average risk may be written

(0.26)  cm + ∫_{X_m} Σ_{v=0}^∞ p_v(x_m) μ_v(x_m) dx_m,

where

(0.27)  μ_v(x_m) = ∫_Ω [ cv + W_0(θ)( 1 − P_θ(R_v(x_m)) ) + W_1(θ) P_θ(R_v(x_m)) ] f_θ(x_m) dξ(θ),  v = 1, 2, ...;
        μ_0(x_m) = ∫_Ω [ W_0(θ)( 1 − φ_m(x_m) ) + W_1(θ) φ_m(x_m) ] f_θ(x_m) dξ(θ).

Clearly this is a special case of (0.11), so that by (0.12), it is non-negative and has at least one absolute minimum with respect to v in the interval defined by (0.13). It cannot have an absolute minimum outside of this interval. The sequence, p, which minimizes (0.26) is just (0.14) for our given value of m, and this again, in a formal sense, completes our Bayes solution.
Using this solution, the expected size of the second sample becomes

(0.28)  E_θ[ v(x_m) ] = ∫_{X_m} v(x_m) f_θ(x_m) dx_m.

The probability, given θ, that the rule will accept A_1 is

(0.29)  E_θ[ P_θ( R_{v(x_m)}(x_m) ) ],

and the minimum average risk is just (0.21) with m* replaced by our given value of m.
In this paper, we are concerned with a particularization of the general Bayes problem outlined above. In this case, Ω consists of two points on the real line, say θ_0 and θ_1, with θ_0 < θ_1. We prefer alternative A_0 when θ = θ_0, A_1 when θ = θ_1. We are given the following a priori distribution over Ω:

(0.30)  ξ({θ_0}) = g_0,  ξ({θ_1}) = g_1 = 1 − g_0.

Our loss functions are

(0.31)  W_0(θ_0) = 0,  W_0(θ_1) = W_0 > 0;

(0.32)  W_1(θ_0) = W_1 > 0,  W_1(θ_1) = 0.

Let

(0.33)  r_N = f_{θ_1}(x_N) / f_{θ_0}(x_N),  N = 1, 2, ...,
and note that

(0.34)  r_{m+v} = r_m · [ f_{θ_1}(x_{m+1}, ..., x_{m+v}) / f_{θ_0}(x_{m+1}, ..., x_{m+v}) ],

where the second factor is the likelihood ratio of the second sample alone. Let

(0.35)  λ = g_0 W_1 / ( g_1 W_0 );

then the sequence of decision functions for which the average risk is a minimum is, by (0.24),

(0.36)  φ_{m+v}(x_{m+v}) = 1, if r_{m+v} > λ;  = 0, if r_{m+v} ≤ λ.

By (0.25),

(0.37)  R_v(x_m) = { (x_{m+1}, ..., x_{m+v}) : r_{m+v} > λ },  v = 0, 1, ...,

so that by (0.27),

(0.38)  μ_v(x_m) = cv [ g_0 f_{θ_0}(x_m) + g_1 f_{θ_1}(x_m) ] + g_0 W_1 f_{θ_0}(x_m) P_{θ_0}( R_v(x_m) ) + g_1 W_0 f_{θ_1}(x_m) [ 1 − P_{θ_1}( R_v(x_m) ) ],  v = 0, 1, ....
As justification for pursuing a Bayes approach to this problem, it may be noted that Wald and Wolfowitz in their paper on the "Optimum Character of the Sequential Probability Ratio Test" [4], proved, for the problem of deciding between two simple alternatives, that for arbitrary a priori probabilities, g_0, g_1, and cost c, every sequential probability ratio test can be regarded as a Bayes solution w.r.t. some values W_0', W_1', say, of W_0, W_1, and hence that

Σ_{i=0,1} g_i [ c E_i n_{S_0} + W_i' α_i(S_0) ] ≤ Σ_{i=0,1} g_i [ c E_i n_{S_1} + W_i' α_i(S_1) ],

where S_0 is any sequential probability ratio test for deciding between two simple alternatives, S_1 any other test for the same purpose; α_i(S_j), i, j = 0, 1, is the probability, under S_j, of rejecting H_i when it is true; E_i n_{S_j} is the expected number of observations under S_j when H_i is true (existence assumed). From this it follows, almost immediately, that if α_i(S_1) ≤ α_i(S_0), i = 0, 1, then E_i n_{S_1} ≥ E_i n_{S_0}, i = 0, 1.

It will be shown that in certain special cases, similar properties hold for the Bayes two-stage test.
CHAPTER I

GENERAL PROPERTIES OF THE SECOND SAMPLE SIZE FUNCTION IN THE NORMAL CASE

1. Nature of the Equation Defining the Second Sample Size Function.

In the following sections, we consider the particular case outlined in the latter part of our introduction, when

(1.1)  f_θ(x_N) = (2π)^{−N/2} exp{ −½ Σ_{i=1}^N (x_i − θ)² },  N = 1, 2, ....

Let

(1.2)  s_m = Σ_{i=1}^m x_i,  m = 1, 2, ...;

let

(1.3)  s'_v = Σ_{i=m+1}^{m+v} x_i,  v = 1, 2, ...,

so that

(1.4)  s_{m+v} = s_m + s'_v.
Let

(1.5)  t_N = s_N − θ̄N − (1/d) log λ,  N = 1, 2, ...,

where d = θ_1 − θ_0 and θ̄ = ½(θ_0 + θ_1); then by (0.33),

(1.6)  r_m = λ e^{d t_m},  m = 1, 2, ...,

and by (0.34),

(1.7)  r_{m+v} = r_m e^{d(s'_v − θ̄v)},  v = 1, 2, ....

Thus, by (0.36),

(1.8)  φ_{m+v}(x_{m+v}) = 1, if t_{m+v} > 0;  = 0, if t_{m+v} ≤ 0;  v = 0, 1, ....
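Under the reconstruction of λ in (0.35), the statistic (1.5) and the terminal rule (1.8) reduce to a few lines. A sketch with illustrative parameter values only:

```python
import math

def t_stat(xs, theta0, theta1, g0, w0, w1):
    """t_N = s_N - theta_bar*N - (1/d) log(lambda), cf. (1.5), with
    d = theta1 - theta0, theta_bar = (theta0 + theta1)/2, lambda = g0*W1/(g1*W0)
    (lambda as reconstructed in (0.35))."""
    d = theta1 - theta0
    theta_bar = 0.5 * (theta0 + theta1)
    lam = g0 * w1 / ((1.0 - g0) * w0)
    return sum(xs) - theta_bar * len(xs) - math.log(lam) / d

# symmetric case: g0 = 1/2 and W0 = W1, so lambda = 1 and t_N = s_N - theta_bar*N
t = t_stat([0.2, 0.9, 1.3], theta0=0.0, theta1=1.0, g0=0.5, w0=1.0, w1=1.0)
accept_A1 = t > 0   # terminal rule (1.8)
```

Here s_3 = 2.4 and θ̄N = 1.5, so t = 0.9 > 0 and A_1 is accepted.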
Using (0.38), we have that the size of the second sample for which the average risk is minimum is given by the integral value of v which minimizes (absolutely) the following function of v:

(1.9)  G_v(t_m) = A(t_m) v + e^{−d t_m} Φ( (t_m − ½dv)/√v ) + Φ( −(t_m + ½dv)/√v ),

where Φ denotes the standard normal c.d.f.,

(1.10)  A(t_m) = c ( 1/W_0 + (1/W_1) e^{−d t_m} ),

and where we shall also need

(1.11)  h(v, t_m) = (2πv)^{−1/2} exp{ −½ d t_m − t_m²/(2v) − d²v/8 }.
Now (1.9) can have an absolute minimum with respect to v only for a value of v which satisfies the inequality

A(t_m) v ≤ 1, t_m ≤ 0;   A(t_m) v ≤ e^{−d t_m}, t_m > 0.

Hence, an upper bound to any value of v for which (1.9) is absolutely minimum is immediately seen to be

1/A(t_m), t_m ≤ 0;   e^{−d t_m}/A(t_m), t_m > 0.
4
We shall, for convenience, in the following, drop the
m subscript from t m, and write some of the above functions
without their arguments, when this practice will cause no
confusion.
If we regard (1.9) as a continuous function of v, v any non-negative real number, we have

(1.12)  (∂/∂v) G_v(t) = A(t) − (d/2) h(v, t).
We have, first of all, that

(1.13)  (∂/∂v) G_v(t) ⋛ 0, according as log v ⋛ f_t(v),

(1.14)  (∂/∂v) [ f_t(v) − log v ] ⋛ 0, according as v ⋚ (2/d²) [ (d²t² + 1)^{1/2} − 1 ] ≡ m(t), say,

and that

(1.15)  (∂/∂v) G_v(t) = 0,

if and only if

(1.16)  log v = 2 log η(t) − ¼ d²v − t² v^{−1} ≡ f_t(v), say,

where

(1.17)  η(t) = [ dZ / ( 2(2π)^{1/2} ) ] · [ e^{−dt/2} + W e^{dt/2} ]^{−1},  W = W_1/W_0,  Z = W_1/c.

We note here that in all the work which follows, we assume that d and Z are both positive numbers.
(1.19)  lim_{v→0} [ (∂/∂v) G_v(0) ] = −∞,   lim_{v→∞} [ (∂/∂v) G_v(0) ] = A(0) > 0,   m(0) = 0,

so that when t = 0, (1.15) has exactly one root in v. This root is positive and is obviously the value of v for which G_v(0) is an absolute minimum. On the other hand, when t ≠ 0,

(1.20)  lim_{v→0} [ (∂/∂v) G_v(t) ] = lim_{v→∞} [ (∂/∂v) G_v(t) ] = A(t) > 0.

Thus, by (1.14), disregarding the case t = 0, (1.15) has

(1.21)  2, 1, or 0 roots in v, according as log m(t) <, =, or > f_t( m(t) ).
Now

(1.22)  log m(t) = log(2/d²) + log[ (d²t² + 1)^{1/2} − 1 ]

is an increasing function of |t|, with unique minimum = −∞ at t = 0. It → ∞ as |t| → ∞ and has negative second derivative for all t ≠ 0.

(1.23)  f_t( m(t) )

has a unique maximum at a value of t which is ≷ 0, according as W_0 ≷ W_1. It tends to −∞ as t → ±∞ and has negative second derivative for all t.
It follows that there exist two numbers, call them t⁻, t⁺, both of which depend upon the parameters d, Z, W, such that

(1.24)  t⁻ < 0 < t⁺,

and such that (1.15) has

(1.25)  2 roots in v, when t⁻ < t < t⁺, t ≠ 0;  1 root, when t = t⁻, t⁺, or 0;  0 roots, when t < t⁻ or t > t⁺.

In the first case above, the two roots of (1.15) lie one above, one below the inflection point v = m(t), so that by (1.14), G_v(t) is relatively maximum at the first, relatively minimum at the second.
We have discussed the unique root of (1.15) when t = 0. When t = t⁻ or t⁺, the unique root, v = m(t⁻) or m(t⁺), is an inflection point of zero slope of G_v(t⁻) or G_v(t⁺). The slope of G_v(t⁻), G_v(t⁺) is thus ≥ 0, v > 0. When t < t⁻ or t > t⁺, the slope of G_v(t) is > 0, v ≥ 0. It follows that when t < t⁻ or t > t⁺, the absolute minimum of G_v(t) is at v = 0.
We define the function

(1.26)  v*(t) = { the larger root of (1.15), t⁻ < t < t⁺, t ≠ 0;  the unique root of (1.15), t = 0, t⁻, or t⁺;  0, t < t⁻ or t > t⁺. }

If now for every t, we take v(t) to be the value of v for which G_v(t) is absolutely minimum (choosing the smallest number when this value is not unique), we have clearly that

(1.27)  v(t) = { 0, G_0(t) ≤ G_{v*(t)}(t);  v*(t), G_0(t) > G_{v*(t)}(t). }

From the above discussion, it is apparent that

(1.28)  { t : G_0(t) > G_{v*(t)}(t) } ⊂ { t : t⁻ < t < t⁺ }.
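Since f_t(v) − log v increases up to the inflection point v = m(t) of (1.14) and decreases thereafter, the larger root v*(t) of (1.15) can be found by bisection to the right of m(t). A numerical sketch (η is supplied as a function; the values below are illustrative, and the sketch assumes the root exists, i.e. f_t(m(t)) > log m(t)):

```python
import math

def v_star(t, d, eta, tol=1e-10):
    """Larger root of log v = f_t(v), f_t(v) = 2*log(eta(t)) - d*d*v/4 - t*t/v, cf. (1.16).

    Assumes f_t(m(t)) > log m(t) at the inflection point
    m(t) = (2/d**2)*(sqrt(d*d*t*t + 1) - 1), cf. (1.14), (1.21)."""
    def g(v):  # g = f_t(v) - log v; positive at m(t), decreasing beyond it
        return 2 * math.log(eta(t)) - d * d * v / 4 - t * t / v - math.log(v)

    lo = max((2 / d ** 2) * (math.sqrt(d * d * t * t + 1) - 1), tol)
    hi = lo + 1.0
    while g(hi) > 0:           # walk right until g changes sign
        hi *= 2
    while hi - lo > tol:       # bisect: g(lo) > 0 >= g(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

v = v_star(0.5, d=1.0, eta=lambda t: 5.0)   # constant eta, purely for illustration
```

With these illustrative values the larger root lies between 5 and 6.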
Recall from (1.15), (1.16) the significance of the equation

(1.29)  log v = f_t(v),

and let us consider, in the following, that W ≠ 1; then

(1.30)  (∂/∂t) f_t(v) = d (1 − W e^{dt}) / (1 + W e^{dt}) − 2t/v = 0,

if and only if

(1.31)  v = (2t/d) (1 + W e^{dt}) / (1 − W e^{dt}) ≡ 𝒱_W(t), say.
We consider, of course, only non-negative values of v, so that when W < 1, the curve (1.31) is defined only on the interval 0 ≤ t < (1/d) log(1/W). In this interval it is convex and has positive slope everywhere. When W > 1, the curve (1.31) is defined only on the interval (1/d) log(1/W) < t ≤ 0. In this interval, it is also convex, but now has negative slope everywhere. In both cases, we have

(1.32)  𝒱_W(0) = 0,   lim_{t → (1/d) log(1/W)} 𝒱_W(t) = ∞.
Figure 1.1
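Under the reconstruction of (1.31), the stated behavior of 𝒱_W(t) for W < 1 (positive slope, blow-up at t = (1/d) log(1/W)) is easy to check numerically; d and W below are arbitrary illustrative values.

```python
import math

def curve_v_w(t, d, w):
    """v = (2t/d)*(1 + W*e^{dt})/(1 - W*e^{dt}), cf. (1.31); defined while W*e^{dt} != 1."""
    q = w * math.exp(d * t)
    return (2 * t / d) * (1 + q) / (1 - q)

d, w = 1.0, 0.5                       # W < 1: curve defined on 0 <= t < (1/d) log(1/W)
t_end = math.log(1 / w) / d           # right endpoint of the interval of definition
vals = [curve_v_w(t, d, w) for t in (0.1, 0.3, 0.5, 0.65)]
```

The sampled values are positive and strictly increasing toward the endpoint, as claimed.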
The role of the function 𝒱_W(t) is indicated more fully by the following analysis:

W < 1:

(1.33)  (∂/∂t) f_t(v) > 0, all v > 0, when t < 0;
        (∂/∂t) f_t(v) ⋛ 0, according as v ≷ 𝒱_W(t), when 0 ≤ t < (1/d) log(1/W);
        (∂/∂t) f_t(v) < 0, all v > 0, when t ≥ (1/d) log(1/W).

W > 1:

        (∂/∂t) f_t(v) > 0, all v > 0, when t ≤ (1/d) log(1/W);
        (∂/∂t) f_t(v) ⋛ 0, according as v ⋚ 𝒱_W(t), when (1/d) log(1/W) < t ≤ 0;
        (∂/∂t) f_t(v) < 0, all v > 0, when t > 0.
Consider now the equation

(1.34)  log 𝒱_W(t) = f_t( 𝒱_W(t) ).

When W < 1, both sides of (1.34) are defined only in the interval 0 ≤ t < (1/d) log(1/W). The L.H.S. is a continuous increasing function of t which → −∞ as t → 0, and → +∞ as t → (1/d) log(1/W). The R.H.S. is a continuous, concave, decreasing function of t, equal to a constant when t = 0, and → −∞ as t → (1/d) log(1/W).

When W > 1, both sides of (1.34) are defined only in the interval (1/d) log(1/W) < t ≤ 0. The L.H.S. is a continuous decreasing function of t which → −∞ as t → 0, and → +∞ as t → (1/d) log(1/W). The R.H.S. is a continuous, concave, increasing function of t, equal to a constant when t = 0, and → −∞ as t → (1/d) log(1/W).

Thus in both cases, the solution to (1.34), call it T_W, is unique.
When W = 1, (1.30) holds true if and only if t = 0. In this case (1.33) may be written

(1.35)  (∂/∂t) f_t(v) ⋛ 0, all v > 0, according as t ⋚ 0,

and we may appropriately define

(1.36)  T_1 = 0.
Lemma 1.

(1.37)  v*(t) { is an increasing function of t, t⁻ ≤ t < T_W;  has a unique maximum at t = T_W, which is equal to 𝒱_W(T_W);  is a decreasing function of t, T_W < t ≤ t⁺. }
Proof.

It is obvious from the definition of T_W that

(1.38)  v*(T_W) = 𝒱_W(T_W),

and that this relationship must also hold in the limit as W → 1. Thus, to prove the lemma, we need only show that

(1)  v*(t') < v*(t), for all t, t' such that t⁻ ≤ t' < t ≤ T_W;
(2)  v*(t') < v*(t), for all t, t' such that T_W ≤ t < t' ≤ t⁺.

I. We first suppose that W < 1.

1a) Let t be any number such that t⁻ ≤ t < T_W; then by (1.33),

(1.39)  f_t(v) < f_{T_W}(v),  all v ≥ v_0,

where v_0 is a number such that
(1.40)  𝒱_W(T_W) > v_0 > { 𝒱_W(t), 0 ≤ t < T_W;  0, t⁻ ≤ t < 0. }

Figure 1.2
It follows, using (1.38), that

(1.41)  v*(T_W) > v*(t) > v_0.
1b) Now let t be any number such that 0 < t < T_W, t' any number such that t⁻ ≤ t' < t; then by (1.33),

(1.42)  f_{t'}(v) < f_t(v),  all v ≥ v_1,

where v_1 is a number such that

(1.43)  𝒱_W(t) > v_1 > { 𝒱_W(t'), 0 < t' < t;  0, t⁻ ≤ t' ≤ 0. }

By the second inequality in (1.40) and the second inequality in (1.41), we have further, that

(1.44)  v_1 < v*(t).

Figure 1.3

Thus, it follows that

(1.45)  v*(t) > v*(t').
1c) Finally, let t, t' be any two numbers such that t⁻ ≤ t' < t ≤ 0; then by (1.33),

(1.46)  f_{t'}(v) < f_t(v),  all v > 0,

from which it immediately follows that

(1.47)  v*(t') < v*(t).   Q.E.D. (1), W < 1.
2a) Let t be any number such that T_W < t ≤ t⁺; then by (1.33),

(1.48)  f_t(v) < f_{T_W}(v),  all v ≥ v_2,

where v_2 is a number such that

(1.49)  { 𝒱_W(t), T_W < t < (1/d) log(1/W);  ∞, (1/d) log(1/W) ≤ t ≤ t⁺ } > v_2 > 𝒱_W(T_W).

Figure 1.4

It follows, using (1.38), that

(1.50)  v*(T_W) > v*(t).
2b) Now let t be any number such that T_W < t < (1/d) log(1/W), t' any number such that t < t' ≤ t⁺; then by (1.33),

(1.51)  f_{t'}(v) < f_t(v),  all v ≥ v_3,

where v_3 is a number such that

(1.52)  { 𝒱_W(t'), t < t' < (1/d) log(1/W);  ∞, (1/d) log(1/W) ≤ t' ≤ t⁺ } > v_3 > 𝒱_W(t).

By (1.50), (1.49),

(1.53)  v_3 < v*(t).

Figure 1.5

Thus

(1.54)  v*(t) > v*(t').

2c) Finally, let t, t' be any numbers such that (1/d) log(1/W) ≤ t < t' ≤ t⁺; then by (1.33),

f_{t'}(v) < f_t(v),  all v > 0,

from which it follows immediately that

v*(t) > v*(t').   Q.E.D. (2), W < 1.
II. Suppose now that W = 1. We have by (1.35) that if t, t' are any two numbers such that t⁻ ≤ t' < t ≤ 0, then

f_{t'}(v) < f_t(v),  all v > 0,

so that

v*(t) > v*(t').

If t, t' are any two numbers such that 0 ≤ t < t' ≤ t⁺, we have the same result.   Q.E.D. (1), (2), W = 1.

III. The proof for W > 1 proceeds in a strictly analogous manner to that given for W < 1.   Q.E.D.
Lemma 2.

v*(t) is a continuous function of t, t⁻ ≤ t ≤ t⁺.

Proof.

The lemma will be proved if, for any given ε > 0 and any arbitrary but fixed t in the indicated intervals, we can prove the following four statements to be true²:

1)  t⁻ ≤ t < T_W :  ∃ δ > 0 ∍ 0 < t' − t < δ ⟹ 0 < v*(t') − v*(t) < ε;
2)  t⁻ < t ≤ T_W :  ∃ δ > 0 ∍ 0 < t − t' < δ ⟹ 0 < v*(t) − v*(t') < ε;
3)  T_W ≤ t < t⁺ :  ∃ δ > 0 ∍ 0 < t' − t < δ ⟹ 0 < v*(t) − v*(t') < ε;
4)  T_W < t ≤ t⁺ :  ∃ δ > 0 ∍ 0 < t − t' < δ ⟹ 0 < v*(t') − v*(t) < ε.

We shall prove, below, only statements 1 and 2. Statements 3 and 4 may be proved in strictly analogous fashion.

2. We use the notation, ∃, to mean "there exists"; ∍, to mean "such that".
Proof of Statement 1.

We are given that t is an arbitrary but fixed number in the interval, t⁻ ≤ t < T_W. Let

ξ₁ = log( 1 + ε / v*(T_W) ).

Now

(∂/∂v) f_t(v) < (∂/∂v) log v,  v > v*(t),  t⁻ ≤ t ≤ t⁺,

from which it follows that

(1.56)  f_t( v*(t) e^{ξ₁} ) < log v*(t) + ξ₁.

Since f_t(v) is a continuous function of t, all t, all v > 0, we can find a positive number, δ, which is ≤ T_W − t and such that

(1.57)  0 < t' − t < δ ⟹ f_{t'}( v*(t) e^{ξ₁} ) < log v*(t) + ξ₁.

But by Lemma 1, this implies that

(1.58)  v*(t) < v*(t') < v*(t) e^{ξ₁},

which in turn implies that

(1.59)  0 < v*(t') − v*(t) < v*(t)( e^{ξ₁} − 1 ) ≤ ε.   Q.E.D.
Proof of Statement 2.

We are given that t is an arbitrary fixed number in the interval t⁻ < t ≤ T_W. Let

(1.60)  ξ₂ = log( v*(T_W) / ( v*(T_W) − ε ) ),

so that

(1.61)  v*(T_W)( 1 − e^{−ξ₂} ) = ε,

and suppose first that

(1.62)  v*(t) e^{−ξ₂} ≤ m(t).

Now

(1.63)  f_t( m(t) ) > log m(t).

Hence, by the continuity of f_t(v), we can find a positive number, δ, such that

(1.64)  0 < t − t' < δ ⟹ f_{t'}( m(t) ) > log m(t).

But by Lemma 1, this implies that

(1.65)  m(t) < v*(t') < v*(t).

Thus, by (1.62),

(1.66)  0 < v*(t) − v*(t') < v*(t)( 1 − e^{−ξ₂} ) ≤ v*(T_W)( 1 − e^{−ξ₂} ) = ε.

If, on the other hand,

(1.67)  f_t( v*(t) e^{−ξ₂} ) > log v*(t) − ξ₂,

we can, by the continuity of f_t(v), find a positive number, δ', such that

(1.68)  0 < t − t' < δ' ⟹ f_{t'}( v*(t) e^{−ξ₂} ) > log v*(t) − ξ₂.

But by Lemma 1, this implies that

(1.69)  v*(t) e^{−ξ₂} < v*(t') < v*(t),

and this leads to the conclusion (1.66).   Q.E.D.
Lemma 3.

(1.70)  G_{v*(t)}(t) { < G_0(t), t_* < t < t*;  = G_0(t), t = t_* or t*;  > G_0(t), t < t_* or t > t*, }

where t_*, t* are two numbers which are dependent upon the parameters d, Z, W, and such that

(1.71)  t⁻ < t_* < 0 < t* < t⁺.
Proof.

By reference to (1.9), we have that

(1.72)  G_0(t) = { 1, t ≤ 0;  e^{−dt}, t > 0 },

and that

(1.73)  G_{v*(t)}(t) = A(t) v*(t) + e^{−dt} Φ( (t − ½ d v*(t)) / √v*(t) ) + Φ( −(t + ½ d v*(t)) / √v*(t) ).

By Lemma 2, this is a continuous function of t, t⁻ ≤ t ≤ t⁺. It is easy to verify that

(1.74)  (d/dt) G_{v*(t)}(t) < 0,  t < 0.

Now previous discussion, see (1.19) − (1.25), has shown that

(1.75)  G_{v*(t⁻)}(t⁻) ≥ G_0(t⁻),   G_{v*(0)}(0) < G_0(0).

Hence, by (1.72), (1.74), there exists in the open interval, t⁻ < t < 0, a unique value of t, call it t_*, which is dependent upon the parameters d, Z, W of (1.72) and (1.73), and such that for all t in the interval t⁻ ≤ t ≤ 0,

(1.76)  G_{v*(t)}(t) ⋛ G_0(t), according as t ⋚ t_*.

To complete the proof, we have by (1.72) that

(1.77)  e^{dt} G_0(t) = 1,  t > 0.

It is easy to verify that

(1.78)  (d/dt) [ e^{dt} G_{v*(t)}(t) ] > 0,  t > 0.

By the discussion referred to above,

(1.79)  G_{v*(t⁺)}(t⁺) ≥ G_0(t⁺).

Hence, by (1.77), (1.78), there exists in the open interval, 0 < t < t⁺, a unique value of t, call it t*, which is dependent upon the parameters, d, Z, W, of (1.72) and (1.73), and such that for all t in the interval 0 ≤ t ≤ t⁺,

(1.80)  G_{v*(t)}(t) ⋛ G_0(t), according as t ⋛ t*.

This completes the proof.

Note that the proof of the above lemma demonstrates the existence of and uniquely defines t_*, t*, respectively, as the negative and positive roots of

(1.81)  G_{v*(t)}(t) = G_0(t)

in the interval t⁻ ≤ t ≤ t⁺.
Lemma 3 and (1.27) now give us

Theorem 1.

(1.82)  v(t) = { v*(t), t_* < t < t*;  0, t ≤ t_* or t ≥ t*, }

where t_*, t* are, respectively, the unique negative and positive roots in t of (1.81), which are dependent upon the parameters d, Z, W, and such that

(1.83)  t⁻ < t_* < 0 < t* < t⁺.

From Lemmas 1 and 2 and Theorem 1, we get

Theorem 2.

v(t) is a continuous function of t, t_* < t < t*.
If t_* ≤ T_W ≤ t*,

v(t) { is an increasing function of t, t_* < t < T_W;  has a unique maximum at t = T_W;  is a decreasing function of t, T_W < t < t*. }

If T_W < t_*, v(t) is a decreasing function of t, t_* < t < t*. If T_W > t*, v(t) is an increasing function of t, t_* < t < t*.
The following interesting peculiarity of the second sample size function, v(t), is easily deducible from the above results.

Theorem 3.

The second sample size function, v(t), has discontinuities at the points t_*, t*:

(1.84)  v(t* − 0) − v(t* + 0) > { v*(t⁺), T_W ≤ t*;  v*(t⁻), T_W > t* };

(1.85)  v(t_* + 0) − v(t_* − 0) > { v*(t⁻), T_W ≥ t_*;  v*(t⁺), T_W < t_* }.

Recall that

(1.86)  v*(t⁻) = m(t⁻),  v*(t⁺) = m(t⁺);

then Theorem 3 implies that each of these discontinuities exceeds the corresponding value of m(t⁻) or m(t⁺).
By Theorem 1, we may now modify the statement of our decision rule as follows. First, compute the numbers t_*, t* (see (1.81)).

1. Take m observations.
2. If the observed sample is x_m, compute t_m (1.5).
   a) If t_m ≤ t_*, accept A_0.
   b) If t_m ≥ t*, accept A_1.
   c) If t_* < t_m < t*, take v(t_m) additional observations.
3. If (2c) occurs and the observed total sample is x_{m+v}, compute t_{m+v}.
   a) If t_{m+v} ≤ 0, accept A_0.
   b) If t_{m+v} > 0, accept A_1.

Note that in general v(t_m) will not be integral, in which case we shall approximate the test by taking the nearest integral value.
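The modified rule reads as a short procedure. In the sketch below, t_star_lo, t_star_hi, and v_of_t stand for t_*, t*, and v(t); computing them requires the constructions of this chapter, so they are passed in as inputs, and all the concrete values are illustrative placeholders rather than quantities from the text.

```python
def two_stage_test(first_sample, draw_more, t_of, t_star_lo, t_star_hi, v_of_t):
    """Bayes two-stage test in the form of the modified rule.

    t_of(sample) -- computes t_N of (1.5); draw_more(n) -- returns n further
    observations; v_of_t(t) -- second sample size function v(t).
    Returns 0 (accept A0) or 1 (accept A1)."""
    t_m = t_of(first_sample)
    if t_m <= t_star_lo:
        return 0
    if t_m >= t_star_hi:
        return 1
    v = int(round(v_of_t(t_m)))              # nearest integral value
    total = list(first_sample) + list(draw_more(v))
    return 1 if t_of(total) > 0 else 0

# toy run: symmetric case, theta0 = 0, theta1 = 1, lambda = 1, so t_N = s_N - N/2
decision = two_stage_test(
    first_sample=[0.6, 0.7],
    draw_more=lambda n: [1.0] * n,           # deterministic stand-in for new data
    t_of=lambda xs: sum(xs) - 0.5 * len(xs),
    t_star_lo=-1.0, t_star_hi=1.0,           # placeholder cutoffs, not computed ones
    v_of_t=lambda t: 4.0,
)
```

Here t_m = 0.3 falls between the cutoffs, four more observations are taken, t_{m+v} = 2.3 > 0, and A_1 is accepted.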
2. An Important Identity.
In this section, two further lemmas are proved and an
important identity established.
These results then lead, in
the following two sections, to the derivation of an asymptotic
expansion for the second sample size function.
We define the function

(2.1)  M_v(μ) = μ ( ¼ d²v + 1 ),  0 ≤ μ ≤ 1.

Note that for every fixed v > 0, (2.1) is a linear function of μ, which intersects the lines μ = 0 and μ = 1 at 0 and ¼d²v + 1, respectively. If, for any arbitrary but fixed value of μ in the half open interval 0 < μ ≤ 1, we set

(2.2)  M_v(μ) = t²/v

and solve for v, we get

(2.3)  v = (2/d²) [ (1 + d²t²/μ)^{1/2} − 1 ] ≡ 𝔪_μ(t), say.

Consider the non-negative v axis in the t, v plane,

(2.4)  t = 0, v ≥ 0,

to correspond to the case μ = 0. It then follows that the curves (2.3), for 0 < μ ≤ 1, plus (2.4), for μ = 0, represent a family with parameter, μ, 0 ≤ μ ≤ 1, the individual curves of which (ignoring points at infinity) are loci of points in the upper half t, v plane which satisfy (2.2). In particular, note that m(t), defined by (1.14), is the particular curve of this family for which μ = 1.

For reasons which will presently become apparent, we now consider the solutions in t of the equation

(2.5)  log 𝔪_μ(t) = f_t( 𝔪_μ(t) ).
First, suppose that μ is an arbitrary but fixed number in the half open interval 0 < μ ≤ 1.

(2.6)  log 𝔪_μ(t)

is an increasing function of |t|, with unique minimum = −∞ at t = 0. It → ∞ as t → ±∞ and is concave for all t ≠ 0.

(2.7)  f_t( 𝔪_μ(t) )

is increasing to a unique maximum between t = 0 and t = (1/d) log(1/W), and then decreasing. It tends to −∞ as t → ±∞ and is concave for all t.

It follows that there exist two and only two values of t, call them t_μ⁻, t_μ⁺, which satisfy (2.5), which must in general depend upon the parameters μ, d, Z, W, and such that

(2.8)  t_μ⁻ < 0 < t_μ⁺.
When μ = 0, we consider equation (1.16) over (2.4). In this case, obviously, t = 0, and we define

(2.9)  t_0⁻ = t_0⁺ = 0.

Note that the numbers, t⁻, t⁺, defined in the discussion above (1.24), (1.25), are the particular cases t_1⁻, t_1⁺ of the above solutions to (2.5).
Lemma 4.

Let μ, d, W be arbitrary but fixed numbers, 0 < μ ≤ 1, d, W > 0; then t_μ⁻(Z) is a continuous decreasing function of Z, t_μ⁺(Z), a continuous increasing function of Z. Furthermore,

(2.10)  lim_{Z→0} t_μ^±(Z) = 0,   lim_{Z→∞} t_μ^±(Z) = ±∞.
Proof.

If we denote the function (2.7) by F_Z(t), to indicate its dependence upon Z, we have for any positive δ, Z,

(2.11)  F_{Z+δ}(t) − F_Z(t) = 2 log( (Z + δ)/Z ),

which quantity is independent of t. Further,

(2.12)  lim_{Z→0} F_Z(t) = −∞,   lim_{Z→∞} F_Z(t) = ∞.

Hence, by the above description of the function, log 𝔪_μ(t), which itself is independent of Z, we have that t_μ⁻(Z) is a decreasing, t_μ⁺(Z), an increasing function of Z, and that the limiting relations (2.10) hold.
Figure 2.1, W < 1
We shall here only prove continuity for t_μ⁺(Z). Continuity for t_μ⁻(Z) may be shown in strictly analogous fashion. For any ε, Z > 0, let

(2.13)  Δ = log 𝔪_μ( t_μ⁺(Z) + ε ) − F_Z( t_μ⁺(Z) + ε ) > 0.

Then, since F_Z(t) is a continuous function of Z, all Z > 0, all t, we can find a δ = δ_{ε,Z} > 0, such that

(2.14)  0 < Z' − Z < δ ⟹ F_{Z'}( t_μ⁺(Z) + ε ) < log 𝔪_μ( t_μ⁺(Z) + ε ).

But this implies that

(2.15)  0 < t_μ⁺(Z') − t_μ⁺(Z) < ε.
Lemma 5.

Let d, Z, W be fixed positive numbers; then t_μ⁻ is a continuous decreasing function of μ, t_μ⁺ is a continuous increasing function of μ, 0 ≤ μ ≤ 1.

Proof.

By (2.5), the definition of t_μ^± (see the discussion above (2.8)), and the definition of v*(t) (see (1.26), (1.16)), we have the following identities in μ,

(2.16)  𝔪_μ( t_μ^± ) = v*( t_μ^± ),  0 < μ ≤ 1.

Thus, besides being respectively the unique positive and negative solutions in t to (2.5), t_μ⁺, t_μ⁻ may equally as well be considered, respectively, as the unique positive and negative solutions in t to

(2.17)  𝔪_μ(t) = v*(t),  0 < μ ≤ 1.

Now, 𝔪_μ(t) is, for every fixed μ, 0 < μ ≤ 1, a continuous, increasing, convex function of |t|, with

(2.18)  𝔪_μ(0) = 0,  0 < μ ≤ 1.

For every fixed t ≠ 0, 𝔪_μ(t) is a continuous decreasing function of μ. v*(t), on the other hand, as described by Lemmas 1 and 2, is positive for all t in the interval t_1⁻ ≤ t ≤ t_1⁺, continuous, increasing to a unique maximum and then decreasing in this interval. Also it is obviously independent of μ.
Figure 2.2

Hence t_μ⁻ is a decreasing function of μ, t_μ⁺ an increasing function of μ, with t_1⁻ ≤ t_μ⁻ < 0 < t_μ⁺ ≤ t_1⁺. By (2.8), (2.9) this is true for all μ, including the left endpoint, in the closed interval 0 ≤ μ ≤ 1.
We shall prove continuity only for t_μ⁺. An essentially similar argument holds for t_μ⁻. Let μ' be any arbitrary but fixed number in the half open interval 0 < μ' ≤ 1, and let ε be any given positive number. Two all-inclusive but mutually exclusive possibilities may occur. First, suppose that

(2.19)  0 < t_{μ'}⁺ ≤ ε;

then since t_μ⁺ is an increasing function of μ, we have that

(2.20)  0 < t_{μ'}⁺ − t_μ⁺ < ε,  0 ≤ μ < μ'.

Now, suppose that

(2.21)  t_{μ'}⁺ > ε;

then clearly, we have that

(2.22)  𝔪_{μ'}( t_{μ'}⁺ − ε ) < v*( t_{μ'}⁺ − ε ).

Hence, since 𝔪_μ(t) is a continuous decreasing function of μ for all t ≠ 0, we can find a δ = δ_ε > 0, such that

(2.23)  0 < μ' − μ < δ ⟹ 𝔪_μ( t_{μ'}⁺ − ε ) < v*( t_{μ'}⁺ − ε ).

Figure 2.3

But this implies that

(2.24)  t_μ⁺ > t_{μ'}⁺ − ε,  0 < μ' − μ < δ.

Hence,

(2.25)  0 < t_{μ'}⁺ − t_μ⁺ < ε.
This proves the continuity of t_μ⁺ for 0 < μ ≤ 1. We now prove t_μ⁺ to be continuous on the right at μ = 0. We have, for any t ≠ 0,

(2.26)  lim_{μ→0} 𝔪_μ(t) = ∞.

Hence, we can certainly find a δ = δ_ε > 0 such that

(2.27)  0 < μ' < δ ⟹ 𝔪_{μ'}(ε) > v*(ε).

But by (2.9) this implies that

(2.28)  0 < t_{μ'}⁺ − t_0⁺ = t_{μ'}⁺ − 0 < ε.   Q.E.D.
By Lemma 5 and (2.16) it now follows that

(2.29)  lim_{μ→0} 𝔪_μ( t_μ^± ) = v*(0),

and this in turn implies that t_0^± = 0 are the limiting solutions to (2.17) as μ → 0. By (2.16), (2.29), for arbitrary but fixed μ, 0 ≤ μ ≤ 1, the point

(2.30)  (t, v) = ( t_μ^±, v*( t_μ^± ) )

lies on a curve of the family (2.3), (2.4) and hence satisfies the relationship (2.2). This gives us the following identities in μ,

(2.31)  M_{v*(t_μ^±)}(μ) = ( t_μ^± )² / v*( t_μ^± ),

which may be written

(2.32)  μ = ( t_μ^± )² / [ v*( t_μ^± ) ( ¼ d² v*( t_μ^± ) + 1 ) ].

Thus

(2.33)  t_μ⁺ = t' ⟹ μ = μ_{t'},  0 < t' ≤ t_1⁺;   t_μ⁻ = t' ⟹ μ = μ_{t'},  t_1⁻ ≤ t' < 0,

where

μ_{t'} = t'² / [ v*(t') ( ¼ d² v*(t') + 1 ) ], say.
On the other hand,

(2.34)  μ_t = μ' ⟹ M_{v*(t)}(μ') = t² / v*(t) ⟹ v*(t) = 𝔪_{μ'}(t).

But, by the discussion above (2.17), this implies that

(2.35)  μ_t = μ' ⟹ t = t_{μ'}⁺ or t_{μ'}⁻,  0 < μ' ≤ 1.

By (2.29), this holds, as well, in the limit as μ' → 0. By Lemma 2, μ_t is a continuous function of t, t_1⁻ ≤ t ≤ t_1⁺. By (2.32), (2.33), and by Lemma 5,

μ_t { is a decreasing function of t, t_1⁻ ≤ t < 0;  has a unique minimum = 0 at t = 0;  is an increasing function of t, 0 < t ≤ t_1⁺. }

Figure 2.4
We thus arrive, finally, by (2.16), (2.29), at the important identity

(2.36)  v*(t) = 𝔪_{μ_t}(t),  t_1⁻ ≤ t ≤ t_1⁺, t ≠ 0;   v*(0) = lim_{t→0} 𝔪_{μ_t}(t).
CHAPTER II

ASYMPTOTIC PROPERTIES OF THE BAYES TWO-STAGE TEST

3. Asymptotic Expression for the Second Sample Size Function.

Equation (2.5) may be written in the form

(3.1)  log 𝔪_μ(t) + ¼ d² 𝔪_μ(t) + t² / 𝔪_μ(t) = 2 log η(t),

where μ is any fixed number in the half open interval 0 < μ ≤ 1. If we now substitute into (3.1) the solutions t = t_μ^±(Z), we get, through the intermediate relations (3.2) − (3.4), the following result,

(3.5)  d | t_μ^±(Z) | = 4 ( 1 − ε_{2μ}^±(Z) ) (log Z) μ^{1/2} / ( μ^{1/2} + 1 )²,

where the error terms ε_{1μ}^±(Z) and ε_{2μ}^±(Z) are defined by (3.6). Note that ε_{1μ}^±(Z), and hence ε_{2μ}^±(Z), are positive decreasing functions, respectively, of |t_μ^±(Z)|, for sufficiently large values of |t_μ^±(Z)|. Hence, by Lemma 4, they are positive decreasing functions of Z for sufficiently large Z. Also, by (3.7), (3.8), they tend to zero as Z → ∞.
Now we define, in (3.9) and (3.10), the corresponding quantities ε_{10}^±(Z) and ε_{20}^±(Z) for μ = 0. Clearly, then, the relationships (3.5) will hold for μ = 0. By (1.26), (1.16), v*(0) is the unique solution in v to the equation

(3.11)  log v = 2 log [ dZ / ( 2(2π)^{1/2} (W + 1) ) ] − ¼ d² v.

This may be written

(3.12)  2 log Z = ¼ d² v*(0) + log v*(0) + 2 log [ 2(2π)^{1/2} (W + 1) / d ].

Now, as Z increases, the R.H.S. of (3.12) must, to maintain the equality, also increase. But the R.H.S. is an increasing function of v*(0). Hence v*(0) is an increasing function of Z. Further, we have that

(3.13)  lim_{Z→∞} v*(0) = ∞.

We can now see that ε_{10}^±(Z), and hence ε_{20}^±(Z), are positive decreasing functions of Z for sufficiently large Z and that

(3.14)  lim_{Z→∞} ε_{10}^±(Z) = 0.
37
Hence, finally, for all
o<
- ~ -< 1,
~
in the closed interval
+
E-
2~
(Z) are positive decreasing functions of Z,
for sufficiently large Z, and
Z
lim
~ 00
+
E- (Z)
2~
=0
,
0 _< ~ _< 1
By (2.32) - (2.35), we have the identities
(3.16)
Thus, by (3.5), we have
where
o~
t
~ t~
(3.18)
E-
2 ~t
(Z),
t~ ~ t ~ 0
•
38
By some simple manipulation of (3.17), we get, for $t_1^- \le t \le t_1^+$, $t \ne 0$, a quadratic in $\hat\mu_t$. The case t = 0 is excluded to avoid dividing by zero. The roots of this quadratic in $\hat\mu_t$ are

(3.20)  $\hat\mu_t = \left[\frac{2(1-\rho_t(Z))\log Z}{d|t|} - 1\right] \pm \left\{\left[\frac{2(1-\rho_t(Z))\log Z}{d|t|} - 1\right]^2 - 1\right\}^{1/2}$.

To choose between these roots, we observe that $\hat\mu_t$ is real and that $0 \le \hat\mu_t \le 1$. Hence the correct root must be so restricted. First, if either root is to be real, we must have

(3.21)  $\frac{2(1-\rho_t(Z))\log Z}{d|t|} - 1 \ge 1$  or  $\le -1$.

If the second of these two possibilities were true, both roots would be negative. It follows that the first inequality of (3.21) must hold. However, in this case, the root (3.20) with the positive second term would always be ≥ 1 and hence could not satisfy our requirements. There remains only the root with the negative second term, which by the first inequality (3.21) is readily seen to satisfy them. Thus,

(3.22)

By (2.36), $\hat\mu_t$ is continuous and equal to zero at t = 0, and the limit of the R.H.S. above as t → 0 is 0. Hence the above relationship holds in the limit as t → 0.
Now let

$\xi(t) = \frac{d|t|}{\log Z}$.

Using (3.5), we have

$0 = \xi(0) \le \xi(t) \le \xi(t^\pm_1(Z)) = 1 - \epsilon^\pm_{21}(Z)$.

Both upper bounds are, by (3.14), < 1 for sufficiently large Z. Substituting ξ(t) into (3.22), we have
(3.25)

and this relationship holds in the limit as t → 0. We now rearrange the R.H.S. of (3.25) for greater convenience:

(3.26)

where, again, this holds for t in the closed interval $t_1^- \le t \le t_1^+$ and in the limit as t → 0. Again, rearranging the R.H.S., we have

(3.27)

where we define

(3.28)

(3.29)

Finally, we can write

(3.30)  $\hat\mu_t = \frac{1 - (1-\xi(t))^{1/2}}{1 + (1-\xi(t))^{1/2}} + 2\rho_{5t}(Z)$,  $t_1^- \le t \le t_1^+$,
and by (2.39), this relationship holds in the limit as t → 0. By (3.17),

(3.36)

where

(3.37)  $\rho_{4t}(Z) = \frac{\rho_t(Z) + \rho_{3t}(Z)\left(2 + \rho_{3t}(Z)\right)}{\left(1 + \rho_{3t}(Z)\right)^2}$.

Substituting (3.36) into (3.33), we have

(3.38)  $v^*(t) = \frac{2}{d^2}\left[1 + (1-\xi(t))^{1/2}\right]^2 \left(1 - \rho_{6t}(Z)\right)\log Z$,  $t_1^- \le t \le t_1^+$,

where

(3.40)

Using (3.14) together with the conclusion above it, we have by (3.18) that

(3.41)  $\lim_{Z \to \infty} \rho_t(Z) = 0$, with $\rho_t(Z) > 0$ for sufficiently large Z,  $t_1^- \le t \le t_1^+$,
and hence it can be shown, in each case, for $t_1^- \le t \le t_1^+$, that

(3.42)  $\lim_{Z \to \infty} \rho_{6t}(Z) = 0$.

Let

(3.43)  $u(t) = 1 + \left(1 - \xi(t)\right)^{1/2}$;

then by (3.38)

(3.44)  $v^*(t) = \frac{2}{d^2}\, u^2(t)\left(1 - \rho_{6t}(Z)\right)\log Z$,  $t_1^- \le t \le t_1^+$.
Substituting this expression into (1.11), we get

(3.45)

Rearrangement of the R.H.S. gives us

(3.46)

where

(3.47)  $J(t) = \pm 1$, according as $t \le 0$ or $t > 0$.
We shall use the following well-known result (see, e.g., [2]):

(3.48)  $\int_y^\infty e^{-\frac{1}{2}x^2}\,dx = e^{-\frac{1}{2}y^2} \sum_{j=0}^{N} c_j\, y^{-(2j+1)} + (-1)^{N+1} R_N$,  N = 0, 1, 2, ...,

where y is a positive number,

$c_j = (-1)^j\, \frac{(2j)!}{2^j\, j!}$,  j = 0, 1, ...,

(3.49)  $R_N = \frac{(2N+2)!}{2^{N+1}(N+1)!} \int_y^\infty x^{-2(N+1)}\, e^{-\frac{1}{2}x^2}\,dx$,  N = 0, 1, ...,

and where

(3.50)  $0 < R_N < \frac{(2N+2)!}{2^{N+1}(N+1)!}\; y^{-(2N+3)}\, e^{-\frac{1}{2}y^2}$.
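The expansion (3.48) is the classical asymptotic series for the normal tail integral, and since $\int_y^\infty e^{-x^2/2}\,dx = \sqrt{\pi/2}\,\mathrm{erfc}(y/\sqrt{2})$ it can be checked directly. A small sketch (the function names are mine, not the author's):

```python
import math

def tail(y):
    # Exact value of the integral in (3.48):
    # int_y^inf e^{-x^2/2} dx = sqrt(pi/2) * erfc(y / sqrt(2)).
    return math.sqrt(math.pi / 2.0) * math.erfc(y / math.sqrt(2.0))

def expansion(y, N):
    # Partial sum of (3.48): e^{-y^2/2} * sum_{j=0}^N (-1)^j (2j-1)!! / y^{2j+1}.
    # The coefficients c_j = (-1)^j (2j)!/(2^j j!) equal (-1)^j (2j-1)!!.
    s, coef = 0.0, 1.0
    for j in range(N + 1):
        if j > 0:
            coef *= -(2 * j - 1)
        s += coef / y ** (2 * j + 1)
    return math.exp(-y * y / 2.0) * s
```

Because the series alternates, successive partial sums bracket the true value, which is exactly the role the remainder $R_N$ plays in the proofs that follow.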
Lemma 6.  Let d, W be arbitrary but fixed positive numbers, and let

(3.51)  $\mathcal{I}_\delta(Z) = \frac{1}{d}\,(1 - \delta)\log Z$;

then for any fixed positive δ < 1, no matter how small, we can, by taking Z sufficiently large, obtain for all t in the closed interval $|t| \le \mathcal{I}_\delta(Z)$ the inequality

(3.52)
Proof.  By (1.72), the lemma will be proved if we can show that

(3.54)  $\lim_{Z \to \infty} G_{v^*(t)}(t) = 0$,  $-\mathcal{I}_\delta \le t \le 0$,

(3.55)  $\lim_{Z \to \infty} \bar G_{v^*(t)}(t) = 0$,  $0 \le t \le \mathcal{I}_\delta$.

By (1.74), (1.78), respectively, (3.54) and (3.55) will be true if

(3.56)  $\lim_{Z \to \infty} G_{v^*(-\mathcal{I}_\delta)}(-\mathcal{I}_\delta) = 0$,

(3.57)  $\lim_{Z \to \infty} \bar G_{v^*(\mathcal{I}_\delta)}(\mathcal{I}_\delta) = 0$.

We first note that, for any fixed positive δ < 1, no matter how small, we have, by (3.5), (3.8), for μ = 1, that

(3.58)  $\mathcal{I}_\delta(Z) \le |t^\pm_1(Z)|$,  for sufficiently large Z.

Hence, by (3.42),

(3.59)  $\lim_{Z \to \infty} \rho_{6,\pm\mathcal{I}_\delta}(Z) = 0$.
We shall prove here only (3.56); (3.57) can be proved in strictly analogous fashion. By (1.73),

(3.60)

By (3.46),

(3.61)  $h^-\!\left(v^*(-\mathcal{I}_\delta),\, -\mathcal{I}_\delta\right) = \left[2\left(1 - \rho_{6,-\mathcal{I}_\delta}(Z)\right)\log Z\right]^{1/2}$,

where, by (3.59),

(3.62)  $\lim_{Z \to \infty} \rho_{6,-\mathcal{I}_\delta}(Z) = 0$.

By (3.48) - (3.50), taking N = 0 and y given by (3.61), we have that

(3.63)

On the other hand,

(3.64)

(3.65)

Hence

(3.66)
But, by (3.63), the R.H.S., and hence the L.H.S., → 0 as Z → ∞. By (3.46),

$\left[2\left(1 - \rho_{6,-\mathcal{I}_\delta}(Z)\right)\log Z\right]^{1/2} \to \infty$  as  $Z \to \infty$,

and hence the second term on the R.H.S. of (3.60) → 0 as Z → ∞. Finally, by (3.44), (3.65), the first term on the R.H.S. of (3.60) may be written

(3.67)

and this also → 0 as Z → ∞.  Q.E.D.
As a direct consequence of Lemmas 3 and 6, we now have the following.

Theorem 4.  Let d, W be arbitrary but fixed positive numbers, and let $t^{*-}$, $t^{*+}$ be the two numbers defined by Lemma 3. Then for any fixed positive δ < 1, no matter how small, we have, by taking Z sufficiently large, that

$-\mathcal{I}_\delta(Z) < t^{*-}(Z)$  and  $t^{*+}(Z) < \mathcal{I}_\delta(Z)$.
4.  Expansion of the Second Sample Size Function.

In this section, we expand the function $v^*(t)$ of (3.38) in terms of the parameter Z, to terms of order $o\!\left(\frac{1}{\log Z}\right)$, our result holding for all t in the closed interval $|t| \le \mathcal{I}_\delta(Z)$. In the following pages, we shall, for convenience, and where no confusion is likely, make use of the abbreviated notation indicated below.
(4.1)  $\rho_i = \rho_{it}(Z)$,  i = 1, 2, ..., 6;

(4.2)  $\xi = \xi(t) = \frac{d|t|}{\log Z}$,  $u = 1 + (1-\xi)^{1/2}$,  $w = 1 - (1-\xi)^{1/2}$;

(4.3)

We shall, for the same reason, where no confusion is likely, drop the arguments from functions elsewhere introduced. For reference to the original definitions of the infinitesimals (4.1), i = 1, ..., 6, see (3.18), (3.21) - (3.40), in the previous section.
By (3.18), (3.6),

(4.4)

where

(4.5)

By (3.7), (3.15), (3.16), (3.34), we have, for $t_1^- \le t \le t_1^+$,

(4.6)

so that, by (4.4),

(4.7)

where

(4.8)

(4.9)  $\rho_7 = \rho_{71} + \rho_{72}$,

(4.10)

(4.11)

(4.12)
We note, first of all, that for all Z > 0, $t_1^- \le t \le t_1^+$,

(4.13)

and that

(4.14)  $\lim_{Z \to \infty} \rho_{7t}(Z) = 0$.

Hence

(4.15)  $\rho^k = o\!\left(\frac{1}{\log^j Z}\right)$,  k ≥ j+1,  j = 1, 2, ...,

and this holds for all t in the closed interval $t_1^- \le t \le t_1^+$.
The results which follow depend upon (4.15) and are obtained by straightforward algebraic methods. They are presented without detailed derivation. (4.10) and (4.11) can be put in the following form:

(4.16)

(4.17)

Adding these together and using (3.31), we get

(4.18)

(4.8) can be put in similar form:
(4.19)

where

(4.20)

Note that here we find the remainder to be of the indicated order only for t in the closed interval $|t| \le \mathcal{I}_\delta$, where $\mathcal{I}_\delta$ is defined by (3.51), and again δ may be any positive fixed number < 1, no matter how small. Substituting (4.18), (4.19) into (4.7), we find, after one iteration, that for $|t| \le \mathcal{I}_\delta$,
(4.21)

where

(4.22)

After progressively expressing $\rho_1$ to $\rho_5$ in terms of ρ, we arrive at the following result for $\rho_6$, valid for $|t| \le \mathcal{I}_\delta$:

(4.23)

Substituting into this our derived expression (4.21) for ρ, we get, again for $|t| \le \mathcal{I}_\delta$,
(4.24)

Finally, we have from (3.44) that

(4.25)  $v^*(t) = \frac{2u^2}{d^2}\left[\log Z - \rho_6 \log Z\right]$,  $t_1^- \le t \le t_1^+$,

so that the desired expression for this function in the interval $|t| \le \mathcal{I}_\delta$ is

(4.26)
5.  Expected Value of the Second Sample Size Function.

By our original assumption (1.1) and by (1.5), the frequency function of $t = t_m$ is

(5.1)

where

(5.2)  $q_{\theta_0} = -\tfrac{1}{2}\, m d - \tfrac{1}{d}\log A$,  $q_{\theta_1} = \tfrac{1}{2}\, m d - \tfrac{1}{d}\log A$.

We will need the following
Lemma 7.
0;0
Pg(t)dt
r
i~ 2
< _/2i t3 _ z ..
1-
log Z
log Z _
-00
where
dqQ
(
Proof.  By (3.48) - (3.50), we have, taking N = 0, and recalling the definition (3.51) of $\mathcal{I}_\delta$,
(5.6)
.
d
(:
~1-8!
16~ z
-
_.(2i. t3!
~~
Z
1
Ei:.
dO)
- ~ (1-8+e 2d m
·dqQ
1
og
2
2
Z) log Z ::
log Z
-1
log Z
-
]
log Z
Q.E.D.
As a result of the above lemma, we certainly have that

(5.7)  $\int_{-\infty}^{-\mathcal{I}_\delta} p_\theta(t)\,dt = o\!\left(\frac{1}{Z \log^j Z}\right)$,  all finite positive j.

We define the notation $E^\delta_\theta$ by the identity
(5.8)  $E^\delta_\theta\, \chi(t) = \int_{-\mathcal{I}_\delta}^{\mathcal{I}_\delta} \chi(t)\, p_\theta(t)\,dt$.

By Theorem 1,

(5.9)

where

(5.10)
By Theorem 4 and Lemma 1, we have, for sufficiently large Z,

(5.11)  $0 < \nu_1 < \max\left[v^*(-\mathcal{I}_\delta),\; v^*(\mathcal{I}_\delta)\right]$.

By (3.38),

(5.12)

so that, by (3.59), (5.7),

(5.13)

Hence,

(5.14)
For the remainder of this section, all relationships in t should be understood to hold for all t in the closed interval $|t| \le \mathcal{I}_\delta$. By (4.26),

(5.15)
First, we note that

(5.16)  $\xi^{j+1} = \left(\frac{d}{\log Z}\right)^{j+1} |t|^{j+1} = o\!\left(\frac{1}{\log Z}\right)\, |t|^{j+1}$,  j = 1, 2, ....

We shall now consider, one at a time, the terms on the R.H.S. of (5.15). Using (5.16), we have that

(5.17)  $(1-\xi)^{1/2} = 1 - \frac{\xi}{2} - \frac{\xi^2}{8} - O(\xi^3)$,

so that

(5.18)  $u^2 = 2 - \xi + 2(1-\xi)^{1/2} = 4 - 2\xi - \frac{\xi^2}{4} - O\!\left(\frac{|t|^3}{\log^3 Z}\right)$.
Hence,

(5.19)  $E^\delta_\theta\, u^2 = 4\, E^\delta_\theta(1) - 2\, E^\delta_\theta\, \xi - \tfrac{1}{4}\, E^\delta_\theta\, \xi^2 + O\!\left(\frac{1}{\log^2 Z}\right)$.

With the help of Lemma 7, we have, after some integration, that

(5.20)  $E^\delta_\theta\, \xi = \frac{d\, l_{1\theta}}{\log Z} + O\!\left(\frac{1}{Z \log Z}\right)$,

where $l_{1\theta}$ is given by

(5.22)

Thus,

(5.23)
We may now consider the second term on the R.H.S. of (5.15). It is easily verified that

(5.25)

Thus, by (4.22), (4.20),
(5.26)
By (4.22), (4.20),

(5.27)

Now $u = 1 + (1 - \xi)^{1/2}$ may be written

(5.28)

Hence, using (5.16), it is easy to show that

(5.29)

Thus, by (5.26),

(5.30)

so that we have

(5.31)
where

(5.32)

Now consider the third term on the R.H.S. of (5.15). We have

(5.33)

so that

(5.34)

Hence,

(5.35)

By (5.16), it is clear that the fourth term on the R.H.S. of (5.15) is $o\!\left(\frac{1}{\log Z}\right)$.
Collecting terms (5.24), (5.31), (5.35), we have, consulting (5.14), (5.15),

(5.36)  $E^\delta_\theta\, v(t) = \frac{8}{d^2}\,\log Z - \cdots$,

where

(5.37)

(5.38)

and $l_{1\theta}$, $l_{2\theta}$ are defined by (5.22) and (5.32), respectively.

With respect to our a priori distribution (0.15), the unconditional expected value of the second sample size function is

(5.40)

where we take, for j = 1, 2, ...,

(5.41)

Note that

(5.42)

and, by (5.2),

(5.43)
The difference between the expected values of the second sample size function at $\theta_1$ and at $\theta_0$ is

(5.44)

This difference is, of course, equal to zero when $\frac{W_1}{W_0} = \frac{g_1}{g_0} = 1$. When the ratio $W = \frac{W_1}{W_0}$ is close to one, it may be convenient to use bounds for $l_{2\theta}$. By (5.32), (3.4), we have that
6.  Error Probabilities.

The probability that our Bayes decision rule accepts alternative $A_1$ when the true parameter value is θ is given by

(6.1)

This may be written in the form

(6.2)

where, as before, we write t for $t_m$.
By Theorem 1, (1.82), the second term in (6.2) may be written

(6.3)  $\int_{-\infty}^{t^{*-}} p_\theta(t)\, E_\theta\!\left[\varphi_m(t) \mid t\right]\,dt + \int_{t^{*+}}^{\infty} p_\theta(t)\, E_\theta\!\left[\varphi_m(t) \mid t\right]\,dt$.

By (1.8), this gives us

(6.4)  $\int_{-\infty}^{t^{*-}} p_\theta(t)\, P_\theta(t > 0 \mid t)\,dt + \int_{t^{*+}}^{\infty} p_\theta(t)\, P_\theta(t > 0 \mid t)\,dt = \int_{t^{*+}}^{\infty} p_\theta(t)\,dt$.
The first term on the R.H.S. of (6.2) may be written

(6.5)  $\int_{t^{*-}}^{t^{*+}} p_\theta(t)\, E_\theta\!\left[\varphi_{m+v(t)}\!\left(t_{m+v(t)}\right) \mid t\right]\,dt = \int_{t^{*-}}^{t^{*+}} p_\theta(t)\, P_\theta\!\left(t_{m+v(t)} > 0 \mid t\right)\,dt = \int_{t^{*-}}^{t^{*+}} p_\theta(t)\, P_\theta\!\left(\hat S_{v(t)} > \bar\theta\, v(t) - t \mid t\right)\,dt$,

where $\bar\theta$ is defined by (1.4) and $\hat S_v$ is normally distributed with mean $\theta v(t)$ and variance $v(t)$. Note that $\hat S_v$ is a generalization of the $S_v$ of (1.2), for which v is not restricted to integral values. It follows that

(6.6)  $P_\theta\!\left(\hat S_{v(t)} > \bar\theta\, v(t) - t \mid t\right) = \frac{1}{\sqrt{2\pi}} \int_{h_\theta(t)}^\infty e^{-\frac{1}{2}x^2}\,dx$,

where

(6.7)  $h_\theta(t) = \frac{(\bar\theta - \theta)\, v(t) - t}{\sqrt{v(t)}}$;  in particular, $h_{\theta_1}(t) = -h^+\!\left(v(t),\, t\right)$.
Thus, the first term on the R.H.S. of (6.2) may be put in the form

(6.8)  $\frac{1}{\sqrt{2\pi}} \int_{t^{*-}}^{t^{*+}} p_\theta(t) \int_{h_\theta(t)}^\infty e^{-\frac{1}{2}x^2}\,dx\,dt$.
Hence,

(6.9)  $Q_\theta = \int_{t^{*+}}^\infty p_\theta(t)\,dt + \frac{1}{\sqrt{2\pi}} \int_{t^{*-}}^{t^{*+}} p_\theta(t) \int_{h_\theta(t)}^\infty e^{-\frac{1}{2}x^2}\,dx\,dt$.

Now, by (3.46) and Theorem 1,

(6.10)

where $t^{*-} < t < t^{*+}$.
By (4.23), we have

(6.12)  $\rho_\delta = \frac{\log V}{\log Z} - \frac{\log^2 V}{2\log^2 Z} + O\!\left(\frac{1}{\log^2 Z}\right)$.

By (6.9), $Q_\theta$ may be written in the following form:

(6.13)  $Q_\theta = \frac{1}{\sqrt{2\pi}} \int_{-\mathcal{I}_\delta}^{\mathcal{I}_\delta} p_\theta(t) \int_{h_\theta(t)}^\infty e^{-\frac{1}{2}x^2}\,dx\,dt + I_\theta(Z)$,

where

(6.14)
By Theorem 4,

(6.15)

By Lemma 7, the R.H.S. of the above inequality is of order less than $\frac{1}{Z \log Z}$. Hence

(6.16)
(6.16)
By (6.13), (6.7), (6.10), (3.47)
00
0
r J
(6.17) Q" =10 I2i
PQO(t)
..18
._p2 dx dt
!2(1-p )l9g Z
8
1
+-!...
~
f
00
f
Po (t)
0
0
/ 2(t
_ix
e
2
(l
dxc1t + 0 Z log Z
1•
2
-p 8)10gZ
For the remainder of this section, all relationships in
t should be understood to hold tor all t in the closed interval
It I ~
18
•
By (3.48) - (3.50), we have, taking N = 1,

(6.18)  $\int_{\sqrt{2(1-\rho_\delta)\log Z}}^\infty e^{-\frac{1}{2}x^2}\,dx = e^{-(1-\rho_\delta)\log Z}\left\{\left[2(1-\rho_\delta)\log Z\right]^{-1/2} - \left[2(1-\rho_\delta)\log Z\right]^{-3/2}\right\} + R_1$,

(6.19)  $0 < R_1 < 3\left[2(1-\rho_\delta)\log Z\right]^{-5/2}\, e^{-(1-\rho_\delta)\log Z}$.
By (6.12),

(6.20)  $-(1-\rho_\delta)\log Z = -\log Z + \log V - \frac{\log^2 V}{2\log Z} + o\!\left(\frac{1}{\log Z}\right)$,

so that

(6.21)  $e^{-(1-\rho_\delta)\log Z} = \frac{V}{Z}\, e^{-\frac{\log^2 V}{2\log Z}} + O\!\left(\frac{1}{Z\log Z}\right) = \frac{V}{Z}\left(1 - \frac{\log^2 V}{2\log Z}\right) + O\!\left(\frac{1}{Z\log Z}\right)$.

Thus, by (6.19),

(6.22)
Again using (6.12), we have that

(6.23)  $\left[2(1-\rho_\delta)\log Z\right]^{-1/2} = \left[2\log Z\right]^{-1/2}\left(1 + \frac{\log V}{2\log Z}\right) + o\!\left(\frac{1}{\log Z}\right)$.

Hence

(6.24)

If we multiply out and substitute for V its definition (4.20), the R.H.S. above may be written

(6.25)
Using (5.16), we find that

(6.26)

Thus we have

(6.27)

Now consider the inner integral of the second term on the R.H.S. above. By (3.48) - (3.50), taking N = 1,
(6.29)  $\int_{\sqrt{2(t^2-\rho_\delta)\log Z}}^\infty e^{-\frac{1}{2}x^2}\,dx = e^{-(t^2-\rho_\delta)\log Z}\left\{\left[2(t^2-\rho_\delta)\log Z\right]^{-1/2} - \left[2(t^2-\rho_\delta)\log Z\right]^{-3/2}\right\} + R_1'$,

$0 < R_1' < 3\left[2(t^2-\rho_\delta)\log Z\right]^{-5/2}\, e^{-(t^2-\rho_\delta)\log Z}$.

By (6.12),

(6.30)  $-(t^2 - \rho_\delta)\log Z = -\log Z + d|t| + \log V - \frac{\log^2 V}{2\log Z} + o\!\left(\frac{1}{\log Z}\right)$,

so that

(6.31)  $e^{-(t^2-\rho_\delta)\log Z} = \frac{V}{Z}\, e^{d|t|}\left(1 - \frac{\log V}{2\log Z}\right) + O\!\left(\frac{1}{Z\log Z}\right) e^{d|t|}$.

Thus, by (6.29),

(6.32)
where

$R_1' = O\!\left(\frac{1}{Z \log Z}\right)\, e^{d|t|}$.
By (6.12),

(6.33)  $\left(t^2 - \rho_\delta\right)^{-1/2} = \frac{1}{|t|}\left(1 + \frac{\log V}{2 t^2 \log Z}\right) + O\!\left(\frac{1}{\log Z}\right)$.

Hence

(6.34)

If we multiply out and substitute for V its definition (4.20), the R.H.S. above may be written

(6.35)
Using (5.16), we find that

(6.36)

(6.37)

Thus we have

(6.38)

It is easy to verify, by (5.1), (5.2), that

(6.39)  $e^{dt}\, p_{\theta_0}(t) = \frac{1}{A}\, p_{\theta_1}(t)$.

Hence, recalling the definition of $\xi(t)$ (3.4), we have, by (6.17), (6.27), (6.38),
(6.40)

Now, by Lemma 7,

(6.41)  $\int_{-\mathcal{I}_\delta}^{\mathcal{I}_\delta} p_\theta(t)\,dt = 1 + o\!\left(\frac{1}{Z\log Z}\right)$,

(6.42)  $\int_{-\mathcal{I}_\delta}^{\mathcal{I}_\delta} t\, p_\theta(t)\,dt = q_\theta + o\!\left(\frac{1}{Z}\right)$.

By (0.30), (0.35), (1.18), (5.2),

(6.43)

so that

(6.44)

By (6.44), we may write this

(6.45)

By (6.13), (6.16),
(6.47)

By (6.41), (6.1), (6.10), this may be written

(6.48)

By (6.21), (6.38) and Lemma 1, this becomes

(6.49)

Finally, by (6.43), (6.44),

(6.50)
By (6.46), (6.51), we have for large Z, taking only the
leading terms of the expansions,
(6.52)
Now regard d, g to be arbitrary, fixed, positive, and choose W and Z to be, respectively, the following functions of d, g:

(6.53)

where $\alpha_0$, $\alpha_1$ are small given positive numbers. If $\alpha_0$, $\alpha_1$ are sufficiently small, and W, Z are chosen as above, then we have, approximately,
,
(6.54)
Let $W_0'$, $W_1'$ be any values of $W_0$, $W_1$, and c′ any value of c, such that

(6.55)  $\frac{W_1'}{W_0'} = W'$,  $\frac{\min(W_0',\, W_1')}{c'} = Z'$.

Since our two-stage test is a Bayes solution (consider now the test with parameters d, g, W′, Z′), its average risk is minimum, i.e.,

(6.56)

where $E_i n$ represents the expected number of observations, at $\theta_i$, of any other two-stage test with first sample of size m, and $a_i$ its probability of rejecting $\theta_i$ when true, i = 0, 1. It follows that if

(6.57)  $a_i \le \alpha_i$,  i = 0, 1,

then

(6.58)  $m + \sum_{i=0}^{1} g_i\, E_{\theta_i} v(t) \le \sum_{i=0}^{1} g_i\, E_i n$.
Now, by (5.36), for small values of the error probabilities $a_i$, i.e., for large values of Z′,

(6.59)

Hence, since $g_0$, $g_1$ are arbitrary, we have asymptotically, for small $a_i$,

(6.60)  $m + E_{\theta_i} v(t) \le E_i n$,  i = 0, 1.

Thus, asymptotically, for small values of the error probabilities, the Wald property mentioned in our introduction is seen to hold for Bayes two-stage tests.
7.  Comparison with One-Stage Bayes Solution.

In this section, we shall compare one- and two-stage Bayes solutions to our problem, for large values of Z, in terms of expected sample size, requiring of the one-stage solution that its error probabilities be the same as those for the two-stage procedure.
If we take a single sample of size $m + \bar v$, the Bayes solution is given by the decision function

(7.1)  $\varphi_{m+\bar v}(t_{m+\bar v}) = 1$ if $t_{m+\bar v} > 0$;  $= 0$ if $t_{m+\bar v} < 0$.

The probability that we shall accept alternative $A_1$, given that the true parameter value is θ, is

(7.2)  $Q_\theta = P_\theta\!\left(t_{m+\bar v} > 0\right) = \frac{1}{\sqrt{2\pi(m+\bar v)}} \int_0^\infty \cdots\,dx$,

where
(7.3)

(7.4)

This may be written

(7.5)  $Q_{\theta_i} = \frac{1}{\sqrt{2\pi}} \int_{y_i}^\infty e^{-\frac{1}{2}x^2}\,dx$,  i = 0, 1,

where, with $\bar n = m + \bar v$,

(7.6)  $y_i = \frac{(-1)^i}{2}\, d\,\bar n^{1/2} + \frac{1}{d}\,(\log A)\, \bar n^{-1/2}$,  i = 0, 1.
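In the symmetric case A = 1, the one-stage error probability reduces to the upper normal tail beyond $\tfrac{1}{2}d\sqrt{\bar n}$, which can be inverted for $\bar n$ directly. A numerical sketch under that simplifying assumption (the function names are mine, not the author's):

```python
import math

def one_stage_error(n, d):
    # P(accept theta_1 | theta_0) when A = 1: the centered statistic
    # exceeds 0 iff a standard normal exceeds (d/2) * sqrt(n).
    return 0.5 * math.erfc((d / 2.0) * math.sqrt(n) / math.sqrt(2.0))

def n_for_error(alpha, d):
    # Invert: (d/2) sqrt(n) = X(alpha), so n = (2 X(alpha) / d)^2,
    # with X(alpha) found by bisection on the upper-tail probability.
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > alpha:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return (2.0 * x / d) ** 2
```

With d = 1 and a 2.5% error probability this reproduces the familiar sample size built from the 1.96 normal quantile.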
To determine what value of $\bar n$ is required so that the error probabilities (7.5) associated with this one-stage procedure will be equal to the error probabilities $Q_{\theta_0}$, $Q_{\theta_1}$ of our two-stage Bayes solution, we set

(7.7)  $\frac{1}{\sqrt{2\pi}} \int_{y_0}^\infty e^{-\frac{1}{2}x^2}\,dx = Q_{\theta_0}$.

This gives us
(7.8)  $y_0 = \tfrac{1}{2}\, d\,\bar n^{1/2} + \frac{1}{d}\,(\log A)\,\bar n^{-1/2} = \chi\!\left(Q_{\theta_0}\right) = \chi_0$, say,

where we define the function $\chi(\alpha)$ by the identity

(7.9)  $\alpha = \frac{1}{\sqrt{2\pi}} \int_{\chi(\alpha)}^\infty e^{-\frac{1}{2}x^2}\,dx$.
Solving (7.8) for $\bar n$, we get

(7.10)  $\bar n = \frac{1}{d^2}\left[\chi_0 + \left(\chi_0^2 - 2\log A\right)^{1/2}\right]^2$.

If we expand the radical on the right-hand side above, we get

(7.11)  $\bar n = \frac{4}{d^2}\left(\chi_0^2 - \log A\right) + O\!\left(\chi_0^{-2}\right)$.

Some investigation shows that, for positive x,

(7.12)  $\chi^2\!\left(x^{-1}\right) = 2\log x - \log\log x - \log 4\pi + O\!\left(\frac{\log\log x}{\log x}\right)$.
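The expansion (7.12) is the standard asymptotic form of the normal upper-tail quantile, and it can be checked against a bisection inverse of the tail probability (a sketch; the χ values here are computed, not the author's tabled ones):

```python
import math

def upper_tail(x):
    # alpha(x) = (1/sqrt(2*pi)) * int_x^inf e^{-t^2/2} dt = erfc(x/sqrt(2)) / 2
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def quantile(alpha):
    # chi(alpha): bisection for upper_tail(chi) = alpha on [0, 40].
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if upper_tail(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For small α the approximation $\chi^2 \approx 2\log(1/\alpha) - \log\log(1/\alpha) - \log 4\pi$ is already within a fraction of a percent.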
By (6.46),

(7.13)

so that, setting

(7.14)

we get

(7.16)  $2\log x = 2\log Z + 2\log\frac{d\, g_0}{4 E_1} + o\!\left(\frac{1}{\log Z}\right)$,  $\log\log x = \log\log Z + O\!\left(\frac{1}{\log Z}\right)$.
Substituting into (7.12), we get

(7.17)  $\chi_0^2 = 2\log Z - \log\log Z + 2\log\frac{g_0\, d}{8\sqrt{\pi}} + O\!\left(\frac{\log\log Z}{\log Z}\right)$.

Substituting this, in turn, into (7.11), we get, recalling the definition (0.35) of A,

(7.18)
If now, on the other hand, we set

(7.19)  $\frac{1}{\sqrt{2\pi}} \int_{y_1}^\infty e^{-\frac{1}{2}x^2}\,dx = Q_{\theta_1}$,

we get

(7.20)  $-y_1 = \tfrac{1}{2}\, d\,\bar n^{1/2} - \frac{1}{d}\,(\log A)\,\bar n^{-1/2} = \chi\!\left(1 - Q_{\theta_1}\right)$.

Solving for $\bar n$ and proceeding as above, we arrive at exactly the same result, (7.18).
A comparison of the leading terms on the right-hand sides of (7.18) and (5.36) now clearly shows that the average number of observations required by the two-stage procedure is asymptotically equal, for large Z, to the number of observations required by the above one-stage plan, i.e.,

(7.21)  $\lim_{Z \to \infty} \frac{m + E_\theta v(t)}{\bar n} = 1$.
Now the one-stage plan is a degenerate two-stage procedure of the type (with first sample of given size m) we have been considering, which has its second sample of constant size, $\bar v$, for all possible observations in the first sample. Our two-stage Bayes solution, since it minimizes the average risk with respect to the class considered, will require not more observations on the average than any such one-stage plan with the same error probabilities. The result (7.21) indicates, however, that the larger the value of Z, the slighter will be any improvement over the one-stage set-up that results.

A similar comparison with the sequential probability ratio test, using Wald's approximation to the expected value, shows that the ratio of expected values, two-stage over sequential probability ratio, approaches 4 in the limit as Z → ∞.

8.
A Trivial Asymptotic Solution.
We have seen, by the results of section 7, that the two-stage Bayes solution to our problem is, for large values of the parameter Z, little better, in terms of the average number of observations required, than a one-stage Bayes solution with the same power. We now examine the possibility of an asymptotic solution to our problem when $d = \theta_1 - \theta_0$ becomes small, other parameters remaining fixed. That this leads to a trivial result is indicated by the following
Lemma 7.  Let Z, W be arbitrary, fixed, positive numbers; then

(8.1)  $\lim_{d \to 0} v(t) = 0$,  all t.

Proof.  By (1.22),

(8.2)  $\lim_{d \to 0} \log \eta_Z(t) = \log t^2$,

which is an increasing function of $|t|$, with unique minimum $= -\infty$ at $|t| = 0$. By (1.2), (1.17),

(8.3)  $\lim_{d \to 0} f_t\!\left(\eta_Z(t)\right) = -\infty$,  all t.

It now follows from the definition of $t^\pm$ (see the discussion above (1.24)) that

(8.4)  $\lim_{d \to 0} t^\pm = 0$.

Thus, by Theorem 1,

$\lim_{d \to 0} v(t) = 0$,  all t.  Q.E.D.
By some simple additional calculation, we then have

(8.5)  $\lim_{d \to 0} Q_\theta = \lim_{d \to 0} E_\theta\, \varphi_m(t) = 0$ or $1$, according as $A > 1$ or $A < 1$.
CHAPTER III
NON-ASYMPTOTIC CONSIDERATION OF THE BAYES TWO-STAGE TEST
9.
Further Properties of the Second Sample Size Function.

Throughout this section we shall regard the parameters m and d to be arbitrary, positive, and fixed, m an integer, and consider the second sample size and its related functions in terms of their dependency upon the loss ratio, $W = W_1/W_0$, and the ratio, $Z = \min(W_0,\, W_1)/c$.

By Theorem 1, we have, for arbitrary fixed W, Z > 0, the inequalities

(9.1)

where, as in the proof of Lemma 3, $t^{*\pm} = t^{*\pm}(W, Z)$ are the respective unique solutions in t to the equations

(9.2)

and by (1.21) - (1.24), $t^\pm = t^\pm(W, Z)$ are determined uniquely as the positive and negative solutions in t to the equation

(9.3)
Let

(9.4)  $y = dt$;

then by (1.23), equation (9.3) may be written

(9.5)  $f_t\!\left(\eta_Z(t)\right) = f(y,\, W,\, Z)$, say,

a function which involves W only through the term $K(y, W)$, where, for all y and all W ≥ 0,

(9.6)  $K(y,\, W) = 2\log\!\left(W + e^{-y}\right)$, $0 \le W \le 1$;  $K(y,\, W) = 2\log\!\left(1 + e^{-y}/W\right)$, $W \ge 1$.
Note that for arbitrary fixed y, K(y,W) is a continuous function of W. For y > 0, it increases from −2y at W = 0 to a unique maximum, $2\log\!\left(1 + e^{-y}\right)$, at W = 1. It then decreases, approaching zero in the limit as W → ∞. The picture for y ≤ 0 is immediately seen from the relationship

(9.7)  $K(-y,\, W) = 2y + K\!\left(y,\, \tfrac{1}{W}\right)$.

This clearly holds for all y, all W > 0, and in the limit as W → 0, ∞. Hence, over the same range,

(9.8)  $K(|y|,\, 0) \le K(y,\, W) \le K(y,\, 1)$.
It follows, by (9.5), that for arbitrary fixed Z, and over the same range of values of y and W,

(9.9)

and

(9.10)  $f(y,\, 1,\, Z) \le f(y,\, W,\, Z) \le f(|y|,\, 0,\, Z)$.

It may be inferred from this, and from the description of K(y,W), that for fixed y, Z, f(y,W,Z) has a unique minimum at W = 1. By (1.22), the R.H.S. of (9.3) is

(9.11)  $g(y)$, say,

and this function is clearly independent of W. Now

(9.12)  $y^\pm(W,\, Z)$

are the unique positive and negative values of y at which each member of the family of curves f(y,W,Z) intersects the curve g(y). Hence, by (9.9),

(9.13)  $t^-(W,\, Z) = -t^+\!\left(\tfrac{1}{W},\, Z\right)$,  W, Z > 0.

Unique intersections at finite positive and negative values of y occur also in the limit as W → 0, ∞. Now both upper and lower bounding curves of (9.10) are symmetric about y = 0. Their intersections with g(y) are thus respectively at

(9.14)
By the nature of g(y), already described in terms of the variable t (see (1.22)), it then follows that for Z > 0, W > 0, and in the limit as W → ∞,

(9.15)  $t^+(1,\, Z) \le \pm t^\mp(W,\, Z) \le t^+(0,\, Z)$.

For arbitrary fixed Z > 0, $t^+(W,Z)$ has a unique minimum, and $t^-(W,Z)$ a unique maximum, at W = 1. Both functions are monotone for W < 1, and monotone (in the opposite sense) for W > 1, and both can be shown to be continuous in W. By (9.1), we then see that $t^{*\pm}(W,Z)$ are certainly bounded functions of W.
The function $v^*(t) = v^*(t,\, W,\, Z)$ is defined by (1.26) to be the larger root in v of the equation

(9.16)

and the unique root in v of this equation when $t = t^\pm$. Let

(9.17)  $z = \frac{dZ}{2\sqrt{2\pi}}$,  $C = \frac{dz}{2}$;
then by (1.16), (1.17), (9.6), we may write this equation

(9.18)

Let

(9.19)  $y = \frac{t}{z}$,  $x = \frac{v}{z^2}$;

then the equation becomes

(9.20)  $\frac{1}{x}\left(Cx + |y|\right)^2 = -\log x - K(2Cy,\, W)$.

If we apply the transformations (9.17), (9.19) to both sides of equation (9.3) and then solve for y, the two solutions in y which are obtained will be

(9.21)  $y^\pm(W,\, C) = \frac{1}{z}\, t^\pm(W,\, Z)$,
where $y^\pm(W, C)$ are functions of W and C only. We then have that

(9.22)  $v^*(t,\, W,\, Z) = z^2\, x(y,\, W,\, C)$,

where $x(y, W, C)$ is the larger root in x of (9.20) whenever

(9.23)  $y^-(W,\, C) \le y \le y^+(W,\, C)$,

and the unique root in x at the endpoints of this interval. In keeping with the definition (1.26), we define $x(y, W, C)$ to be identically zero for all y outside, and not bordering, the interval (9.23).
For arbitrary fixed C > 0, let S(C) be the region in the y,W plane, W ≥ 0, defined by (9.23) and including the bounding curves $y = y^\pm(W, C)$. It can be shown, by some simple arguments which involve the equation (9.20), that $x(y, W, C)$ forms a continuous bounded surface over S(C). For fixed y, the surface is monotone in W, decreasing when W < 1, increasing when W > 1. For fixed W, its behavior as y varies may be inferred, by (9.22), from Lemmas 1 and 2.

Now for arbitrary Z > 0, let S′(Z) be the region in the t,W plane, W ≥ 0, bounded by the t axis and the curves $t = t^\pm(W, Z)$.
Reversing the transformations (9.17), (9.19), we have, by the above, that $v^*(t, W, Z)$ is continuous and bounded over S′(Z). For fixed t, it is monotone in W, decreasing when W < 1, increasing when W > 1. As a function of t, for fixed W, it is described by Lemmas 1 and 2.
In addition, we note that, by (9.20), (9.21),

(9.24)  $x(-y,\, W,\, C) = x\!\left(y,\, \tfrac{1}{W},\, C\right)$,

so that, by (9.22),

(9.25)  $v^*(-t,\, W,\, Z) = v^*\!\left(t,\, \tfrac{1}{W},\, Z\right)$.

The symmetry of $v^*$ in t when W = 1 is just a special case of (9.25).
On the other hand, using the results of section 2, in particular the identities (2.38), (2.39), it is easy to show that for arbitrary fixed t, W in the region S′(Z), $v^*(t, W, Z)$ is a continuous increasing function of Z, all Z > 0. In addition, by Lemma 4,

(9.26)  $\lim_{Z \to 0} v^*(t,\, W,\, Z) = 0$,  $\lim_{Z \to \infty} v^*(t,\, W,\, Z) = \infty$.

Further, for arbitrary fixed W, $t^+(W, Z)$ increases, and $t^-(W, Z)$ decreases, continuously with Z. By (2.10), S′(Z) reduces to the positive W axis in the limit as Z → 0, while as Z → ∞ it becomes the entire upper half t,W plane.
To relate the second sample size function to the result (9.22), we apply the transformations (9.17), (9.19) to the L.H.S. of both equations (9.2). The unique solutions in y obtained will be, respectively,

(9.27)  $y^{*\pm}(W,\, C) = \frac{1}{z}\, t^{*\pm}(W,\, Z)$,

where $y^{*\pm}(W, C)$ are functions of W and C only. By Theorem 1, the second sample size function may then be written

(9.28)  $v(t,\, W,\, Z) = z^2\, x(y,\, W,\, C)$ for $y^{*-}(W,C) < y < y^{*+}(W,C)$;  $= 0$ otherwise.

Finally, it can be shown that the points of discontinuity of the second sample size function, $t^{*\pm}(W, Z)$, are continuous in W and Z.
First,

(9.29)

is clearly continuous in v, t, W, and Z; v, Z > 0, W > 0, all t. $v^*(t, W, Z)$ is positive and continuous in t, W, Z everywhere in

(9.30)  $\left\{(t,\, W,\, Z) : t^-(W,Z) \le t \le t^+(W,Z),\; Z > 0,\; W \ge 0\right\}$.

Hence $G_{v^*(t)}(t)$ is continuous in t, W, Z over this range.
As was indicated in the proof of Lemma 3, $G_{v^*(t)}(t)$ is monotone decreasing in t for any fixed W, Z in (9.30). We may regard this function, for fixed Z, as a continuous surface over the t, W plane. The trace of this surface in the plane of height 1 above the t, W plane will, because of the continuity of the surface, its monotonicity in t for all W > 0, and because of (9.1), describe a curve which determines a single-valued continuous function of W. This curve will, by (9.2), of course, be $t^{*-}(W, Z)$. Exactly the same argument, with Z and W interchanged, shows the continuity and single-valuedness of $t^{*-}(W, Z)$ in Z. Similarly, since $e^{dt}\, G_{v^*(t)}(t)$ is monotone increasing in t for all fixed W, Z in (9.30), we can infer the continuity of $t^{*+}(W, Z)$ in W and Z.
10.  Some Exploratory Computations in the Symmetric Case.

To obtain a more exact idea of the behavior of our two-stage Bayes solution for intermediate values of the parameters, we investigate, by means of computation, the symmetric case of equal losses and equal a priori probabilities; i.e., in this section we shall consider that

(10.1)  $W_0 = W_1$,  $g_0 = g_1$.

Since we are taking W = 1, we cease, for convenience, to indicate it as an argument in functions dependent on it. We shall, however, where necessary for understanding, indicate functional dependence upon other parameters.
When W = 1, the solutions to equations (9.2), (9.3) are respectively symmetrical about t = 0, i.e.

(10.2)  $t^+ = -t^- = \bar t$, say;  $t^{*+} = -t^{*-} = t^*$, say,

and both these numbers depend only upon Z and d. By (9.21), (9.27),

(10.3)  $\frac{1}{z}\,\bar t = \gamma_1$, say;  $\frac{1}{z}\, t^* = y^*$, say.

$\gamma_1$ and $y^*$ depend only on the parameter C defined by (9.17). By (9.1), we have, indicating the functional dependence on C,

(10.4)  $y^*(C) \le \gamma_1(C)$.
From section 9, we have that, by using the transformation

(10.5)

equation (9.3) may be written

(10.6)

The L.H.S. of (10.6) is a continuous, increasing, concave function of |y| which → −∞, ∞ as |y| → 0, ∞, and which is independent of C. Hence the solution of (10.6),

(10.7)

approaches zero as C → 0. Substituting this solution into the L.H.S. of (10.6) and solving the resulting identity in C for $\gamma_1(C)$, we get

(10.8)  $\gamma_1(C) = \cdots + \varepsilon_1(C) + \varepsilon_2(C)$,

where $\varepsilon_1(C)$, $\varepsilon_2(C) \to 0$ as $C \to 0$.
Rewriting (10.6) in the form

(10.9)

we find that the L.H.S. is, for fixed C, a continuous, increasing, concave function of |y| which → −∞, ∞ as |y| → 0, ∞, while for fixed y it is a continuous increasing function of C. Hence the solution, $|y| = \gamma_1(C)$, is a decreasing function of C. It follows, by (10.6), that its limit as C → 0 is a least upper bound for $\gamma_1(C)$, and hence, by (10.4), an upper bound for $y^*(C)$.
By (9.28), the second sample size function v(t) may be written

(10.10)  $v(t) = z^2\, x(y,\, C)$, $|y| < y^*(C)$;  $v(t) = 0$ otherwise,

where $x(y, C)$ is the larger root in x of the equation

(10.11)

for all y such that $|y| < \gamma_1(C)$, the unique root when $|y| = \gamma_1(C)$, and identically zero for $|y| > \gamma_1(C)$. Clearly, this function depends on y and C only and is symmetric about y = 0.
Applying the transformations (9.17), (9.19) to the L.H.S.'s of equations (9.2), we have, for W = 1,

(10.12)

where

(10.13)

Setting (10.12) equal to 1, we obtain $y^*(C)$ as the unique solution in y.
Our decision rule for deciding between QO and 91
may now be written: Take m observations. If the observed
sample is Xl"."
v=--Zt
Xm, form the function
a
1
-
Z
r m~ JC.
_
1a l
1
1 m (QO + 9 ) ]
- -2
1
.
dy ,
104
If $y \le -y^*(C)$, accept $\theta_0$. If $y \ge y^*(C)$, accept $\theta_1$. If, however, $|y| < y^*(C)$, take $v = z^2\, x(y, C)$ additional observations. If, now, the total observed sample is $x_1, \ldots, x_{m+v}$, form the function $t_{m+v}$. If this is negative, accept $\theta_0$; if positive, accept $\theta_1$. We need not consider the case $t_{m+v} = 0$, since this has probability zero.
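The rule above can be sketched in code. Everything below is schematic: `x_of_y` stands in for the tabulated root x(y,C) of (10.11), `ystar` for y*(C), and the statistics are the centered sums of the displays above; none of these names are the author's.

```python
def two_stage_decide(first_sample, theta0, theta1, z, ystar, x_of_y, draw):
    # First stage: y = (1/z) * sum_i [x_i - (theta0 + theta1)/2].
    mid = 0.5 * (theta0 + theta1)
    y = sum(x - mid for x in first_sample) / z
    if y <= -ystar:
        return theta0, 0                 # accept theta0 on the first sample
    if y >= ystar:
        return theta1, 0                 # accept theta1 on the first sample
    v = int(round(z * z * x_of_y(y)))    # second sample of size v = z^2 x(y, C)
    xs = list(first_sample) + [draw() for _ in range(v)]
    total = sum(x - mid for x in xs)     # t_{m+v}: its sign decides
    return (theta1 if total > 0 else theta0), v
```

Here `draw` would produce the additional observations; in the computations of this section, `x_of_y` would interpolate Table II.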
Table I

C       y*(C)
.01     .2536
.10     .2535
1.00    .2373
3.00    .1833
5.00    .1472
10.00   .1007
20.00   .0644
50.00   .0334
100.00  .0196
In Table I we give, for some selected values of C, the values of the function y*(C), rounded at the 4th decimal place. These numbers were obtained by setting (10.12) equal to 1 and using this equation in conjunction with (10.11). Standard tables, which are listed in the bibliography, were used for this and the remaining tables in this section. From these results, it would appear that y*(C) is a decreasing function of C, with L.U.B. less than .26, which is well below the upper bound given by (10.8).
Table II:  x(y,C)

  y \ C    .01        1          3           5           10          50           100
  0        .249994    .203888    .100859     .0582611    .0236015    .00194291    .000602768
  .015     .249769    .203663    .100634     .0580360    .0233768    .00173058    .000414810
  .030     .249092    .202967    .0999578    .0573602    .0227054    .00115491
  .045     .247961    .201855    .0988263    .0562310    .0215930
  .060     .246367    .200262    .0972332    .0546430    .0200387
  .075     .244304    .198191    .0951618    .0525847    .0180149
  .090     .241757    .195649    .0926143    .0500341    .0154112
  .105     .238711    .192600    .0895485    .0469495
  .120     .235144    .189027    .0859346    .0432499
  .135     .231031    .184902    .0817166    .0387638
  .150     .226338    .180186    .0768041
  .165     .221021    .174829    .0710398
  .180     .215025    .168762    .0641129
  .195     .208277    .161888
  .210     .200613    .154065
  .225     .192071    .145010
  .240     .182252
  y*(C)    .172008    .136558    .0623741    .0342506    .0129849    .000950055   .000285660
In table II, we have tabulated, for some of the values of C in table I, and for values of y from y = 0, at intervals of length .015, up to y = y*(C), the function x(y,C). Entries were computed from (10.11) to six significant figures, the last figure being rounded.
The expected value of the second sample size function and the probabilities of wrong decisions may be easily found in terms of integrals which involve x(y,C). In this symmetric case, we have, of course,

(10.14)    E_{θ₀} v(t) = E_{θ₁} v(t) = Ev(t), say;    Q_{θ₀} = 1 − Q_{θ₁} = Q, say.

Also, by (10.1), W and g are related by (10.15).
By (5.1), (5.2), we have, since v(t) is symmetric about t = 0,

(10.16)    Ev(t) = (1/√(2πm)) ∫_{−t*}^{t*} v(t) e^{−(1/2m)(t + ½md)²} dt .

Applying the transformations (9.17), (9.19), and the additional transformation (10.17), we get, after some simple manipulation,

(10.18)    d² Ev(t) = (8C³/(r√(2π))) e^{−½r²} ∫₀^{y*(C)} x(y,C) cosh(Cy) e^{−½(Cy/r)²} dy .
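As a numerical spot-check of (10.18) — taken here in the reconstructed form d²Ev(t) = (8C³/(r√(2π))) e^{−½r²} ∫₀^{y*(C)} x(y,C) cosh(Cy) e^{−½(Cy/r)²} dy, which is an assumption of this sketch — one can integrate the tabulated C = 10 column of table II by the trapezoidal rule and compare with the d²(m+Ev) column of table III after subtracting d²m = 4r².

```python
import math

# x(y, 10) sampled from table II at steps of .015 up to y*(10) = .1007.
ys = [0.0, 0.015, 0.030, 0.045, 0.060, 0.075, 0.090, 0.1007]
xs = [0.0236015, 0.0233768, 0.0227054, 0.0215930,
      0.0200387, 0.0180149, 0.0154112, 0.0129849]

def d2_Ev(C, r):
    # (10.18) in the reconstructed form above, evaluated by the
    # trapezoidal rule over the tabulated grid.
    f = [x * math.cosh(C * y) * math.exp(-0.5 * (C * y / r) ** 2)
         for y, x in zip(ys, xs)]
    integral = sum(0.5 * (f[i] + f[i + 1]) * (ys[i + 1] - ys[i])
                   for i in range(len(ys) - 1))
    return (8.0 * C**3 / (r * math.sqrt(2.0 * math.pi))) \
        * math.exp(-0.5 * r * r) * integral
```

For C = 10, r = 1 this gives about 3.89, against 7.892 − 4 = 3.892 from table III; for r = .5 it gives about 7.94 against 8.951 − 1 = 7.951.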
On the other hand, by (6.9),

(10.19)    Q = (1/√(2πm)) ∫_{t*}^{∞} e^{−(1/2m)(t + ½md)²} dt
             + (1/(2π√m)) ∫_{−t*}^{t*} e^{−(1/2m)(t + ½md)²} ∫_{h_{θ₀}(t)}^{∞} e^{−½y²} dy dt ,

where h_{θ₀}(t) is defined by (6.7). Applying the same transformations as above, we arrive at the result (10.20). Clearly, (10.18) and (10.20) are functions of C and r only.
We now compare the two-stage test of θ₀ v.s. θ₁ which is defined by the second sample size function (10.10) and the decision function (1.8), with the analogous one-stage Bayes solution and with the sequential probability ratio test, in terms of expected sample size, requiring of these tests that they have error probabilities equal to Q.
By (10.15), (7.5), the probability that the one-stage solution accepts θ₁ when θ₀ is true, or vice versa, is given by

(10.21)    (1/√(2π)) ∫_{½d√n̄}^{∞} e^{−½y²} dy ,

where n̄ is the one-stage sample size. Setting this equal to Q gives us

(10.22)    ½ d √n̄ = X(Q) = X_Q , say,

where X(a), 0 ≤ a ≤ 1, is defined by

(10.23)    (1/√(2π)) ∫_{X(a)}^{∞} e^{−½y²} dy = a .

Thus

(10.24)    d² n̄ = 4 X_Q² .
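The one-stage size in (10.22)–(10.24) only requires inverting the normal tail integral (10.23). A minimal sketch (X(·) found by bisection, rather than from the standard tables the author used):

```python
import math

def normal_tail(x):
    # (1/sqrt(2*pi)) * integral_x^inf e^{-y^2/2} dy, via the
    # complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def X(Q, lo=-10.0, hi=10.0):
    # Solve normal_tail(x) = Q by bisection (the tail is strictly decreasing).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if normal_tail(mid) > Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d2_nbar(Q):
    # (10.24): d^2 * n-bar = 4 * X_Q^2
    xq = X(Q)
    return 4.0 * xq * xq
```

For example, Q = .3446 gives X_Q ≈ .4 and d²n̄ ≈ .64, matching the C = .01, r = .4 entry of table III.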
Now consider the sequential probability ratio test of θ₀ v.s. θ₁ which has symmetric error probabilities, Q. Wald [3] gives the following approximation (a lower bound) to the expected number of observations required by this test:

(10.25)    Ē = (2/d²) (1 − 2Q) log((1 − Q)/Q) .

Multiplying both sides by d², we have

(10.26)    d² Ē = 2 (1 − 2Q) log((1 − Q)/Q) .

Since, as we have shown, Q is a function of C and r only, it follows that (10.24) and (10.26) are also functions of C and r only.
Now our two-stage procedure is a Bayes solution. Hence the average risk, (10.27), is a minimum among all two-stage tests with first sample of size m. When (10.28) holds, i.e., when (10.1) is satisfied, the average risk is

(10.29)    c [m + Ev(t)] + W Q ,

and this is minimum among all symmetric two-stage tests with first sample of size m.
By continuity results of the previous section, Q, (10.19), is seen to be a continuous function of the parameter, Z, other parameters being fixed. By the asymptotic results of section 6, we have that

(10.30)    lim_{Z→0} Q = 0 .

Now

(10.31)    lim_{C→∞} Z = 0 .

Hence, Q is a continuous function of C, and

(10.32)    lim_{C→∞} Q = 0 .
On the other hand, by (10.20),

(10.33)    lim_{C→0} Q = (1/√(2π)) ∫_r^∞ e^{−½y²} dy .
It follows that, for fixed r, given any a in the interval

(10.34)    0 < a < (1/√(2π)) ∫_r^∞ e^{−½y²} dy ,

we can always find a value of C such that

(10.35)    Q(C,r) = a .

Since r > 0, this implies that, given any a in the interval 0 < a < ½, we can always find a pair, (C,r), such that (10.35) holds.
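The limit in (10.33) is just the standard normal tail, and the C = .01 column of table III sits essentially on it: the tail at r = .05 is .4801 against the tabled Q = .4800, and at r = .25 it is .4013. A sketch of the check (math.erfc gives the tail directly):

```python
import math

def tail(r):
    # (1/sqrt(2*pi)) * integral_r^inf e^{-y^2/2} dy -- the limit in (10.33)
    return 0.5 * math.erfc(r / math.sqrt(2.0))
```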
Let (C*, r*) be a particular pair such that (10.35) holds. Recall the definition of Z (1.18). In the symmetric case, (10.28), this is

(10.36)    Z = Wd/(4σ²C) .

Now let W*, σ* be any two positive numbers such that

(10.37)    W*d*/(4σ*²C*) = Z .

Also, let m*, d* be any two positive numbers, the first integral, such that

(10.38)    ½ d* √m* = r* .
Consider the two-stage Bayes procedure with starred parameters and let E*v(t) be its expected second sample size. The error probability is, of course, a. Let S be any other two-stage test with first sample of size m* for deciding between θ₀ and θ₀ + d*, E^S n its expected total sample size at these two points, and a(S) the probability of wrong decisions in its use, and suppose that

(10.39)    a(S) ≤ a .

Since we must have

(10.40)    c*[m* + E*v(t)] + W* a ≤ c* E^S n + W* a(S) ,

it follows that

(10.41)    m* + E*v(t) ≤ E^S n .

Thus, the Bayes solution in the symmetric case provides a lower bound for the average number of observations required by double sampling schemes with fixed first sample size.
Table III

  C      r      Q        d²(m+Ev)    d²n̄        d²Ē        R₁        Rₛ       R
  .01    .05    .4800    .010337     .010055     .006399    .99489    1.563    .014
         .25    .4013    .250471     .250055     .157995    .99918    1.582    .036
         .4     .3446    .640442     .640056     .399752    .99991    1.601    .042
         .5     .3085    1.0032      1.0455      .618047    .99995    1.618    .041
         2.5    .0062    25.0832     25.0375     10.0248    .99997    2.494    .055
Table III (continued)

  C      r      Q         d²(m+Ev)    d²n̄        d²Ē       R₁         Rₛ       R
  .01    4.     .0₄317    64.0101     64.0483     20.7166   .999999    3.069    .002
         5.     .0₆267    100.0114    100.0480    30.1290   .999999    3.319    .001
  1.     .05    .3258     .815        .816        .507      .999       1.609    .002
         .25    .3249     .7271       .8251       .5125     .881       1.419    .314
         .4     .2961     .9466       1.1412      .7060     .825       1.305    .455
         .5     .2709     1.2389      1.4885      .9071     .832       1.366    .429
         1.     .1455     4.0842      4.4613      2.5110    .915       1.621    .193
         1.5    .0621     9.0302      9.4561      4.7562    .955       1.899    .091
         2.     .0213     16.0097     16.4543     7.3322    .973       2.183    .049
         2.5    .0056     25.0025     25.4535     10.1596   .982       2.461    .030
         4.     .0₄298    64.0411     64.4529     20.8385   .993       3.071    .010
         5.     .0₆270    100.0610    100.4505    30.247    .996       3.306    .006
  3.     .25    .1722     3.5420      3.5763      2.0590    .990       1.720    .023
         .4     .1742     3.2609      3.5172      2.0280    .921       1.608    .172
         .5     .1695     3.1822      3.6563      2.1006    .870       1.515    .305
         1.     .1061     4.8506      6.2281      3.3592    .779       1.444    .480
         1.5    .0477     9.3113      11.1271     5.4181    .837       1.719    .318
         2.     .0161     16.0982     18.0900     7.8753    .890       2.044    .195
Table III (continued)

  C      r      Q         d²(m+Ev)    d²n̄        d²Ē       R₁      Rₛ       R
  3.     2.5    .0046     25.0256     27.0730     10.6373   .924    2.353    .125
         4.     .0₄242    64.0312     66.0531     21.2609   .969    3.010    .046
         5.     .0₆220    100.0511    102.05      31.06     .980    3.220    .029
  5.     .25    .1141     5.79        5.81        3.16      .997    1.830    .008
         .4     .1167     5.4977      5.6807      3.1036    .968    1.771    .071
         .5     .1169     5.2362      5.6725      3.0997    .923    1.689    .170
         1.     .0819     5.8002      7.7577      4.0430    .748    1.435    .521
         1.5    .0385     9.6116      12.5018     5.9399    .769    1.628    .432
         2.     .0138     16.2134     19.4130     8.3032    .835    1.953    .288
         2.5    .0039     25.0558     28.3615     11.0144   .883    2.275    .191
         4.     .0₄204    64.0327     67.3174     21.5943   .951    2.964    .072
         5.     .0₆187    100.0524    103.30      30.99     .968    3.227    .046
  10.    .4     .063      9.28        9.34        4.74      .993    1.959    .013
         .5     .0641     8.951       9.261       4.675     .967    1.915    .066
         1.     .0525     7.892       10.507      5.177     .751    1.525    .491
         1.5    .0268     10.510      14.897      6.798     .706    1.546    .542
         2.     .0100     16.487      21.653      9.008     .761    1.830    .409
         2.5    .0029     25.128      30.500      11.638    .824    2.159    .287
         4.     .0₄155    64.0362     69.402      22.144    .922    2.890    .114
         5.     .0₆143    100.0555    105.31      31.52     .949    3.174    .073
In table III, we have tabulated the functions Q (10.20), d²[m + Ev(t)] ((10.18) plus d²m = 4r²), d²n̄ (10.24), and d²Ē (10.26). Computations were made for five values of C and selected values of r. Gregory's formula of numerical integration [6] was employed to evaluate the integrals in (10.18) and (10.20). All entries are rounded at the last decimal place given.

Wald and Wolfowitz [4] (see introduction) have shown that the sequential probability ratio test minimizes, simultaneously at θ₀ and θ₁, the expected number of observations among all sequential tests with the same or smaller error probabilities. Both one- and two-stage tests may be regarded as degenerate sequential tests. Also, the one-stage test may be regarded as a degenerate two-stage test. We thus have that

(10.42)    Ē < m + Ev(t) < n̄ ,

and this inequality is clearly evidenced by the results in table III.
To compare the two-stage procedure with its one-stage and sequential probability ratio test analogues, we have computed the following two expectation ratios:

(10.43)    R₁ = (m + Ev(t))/n̄ ,    Rₛ = (m + Ev(t))/Ē .

If we consider the "distance" (using expected number of observations as criterion) between the one-stage and sequential probability ratio tests to be 1, we can use the measure

(10.44)    R = (n̄ − [m + Ev(t)]) / (n̄ − Ē)

to indicate the fraction of this "distance" which lies between the one- and two-stage tests. These measures of comparison have been tabulated in table III.
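All three measures are ratios of expected sample sizes, so the d²-scaled entries of table III can be fed in directly. A sketch, checked against the C = 10, r = 1 row (d²(m+Ev) = 7.892, d²n̄ = 10.507, d²Ē = 5.177):

```python
def ratios(m_plus_Ev, n_bar, E_bar):
    # (10.43) and (10.44); all three are invariant under the d^2 scaling,
    # so the tabled d^2-multiplied entries may be used as-is.
    R1 = m_plus_Ev / n_bar
    Rs = m_plus_Ev / E_bar
    R = (n_bar - m_plus_Ev) / (n_bar - E_bar)
    return R1, Rs, R
```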
In decisions between two simple alternatives θ₀, θ₁, the problem usually specifies the distance, d, between θ₀ and θ₁, as well as the probabilities, α, β, which indicate the degree of allowable error. Table III, though inadequate for more exhaustive investigation, provides some insight into the behavior of our two-stage test, under these restrictions, in the symmetric case, α = β.

Examination of table III indicates that for fixed C, the expected total sample size has a minimum w.r.t. r, and is monotone on either side of the minimum. This minimizing value of r, call it r̄(C), appears to increase with increasing C, while the corresponding value of Q decreases. If this can be taken as representing the actual case, then the "best" parameter point, (C,r), is of the form (C, r̄(C)), where C satisfies the equation

    Q(C, r̄(C)) = a ,

and a is the desired error probability.
TABLE IV
(Approximate)

  C      r̄(C)    Q
  .01    .05     .48
  1.     .25     .32
  3.     .5      .17
  5.     .5      .11
  10.    1.      .05
For a fixed value of d, selection of r in this way would insure that we took the best value of the first sample size, m. Practical use of these tables of course requires further extension and greater accuracy in table IV, which in turn requires extension of table III to appropriate values of C and r. However, we may illustrate the above by an example. Suppose we desire to test θ₀ against θ₀ + .1 with a probability of error approximately equal to .05. Table IV indicates that C = 10 and r = 1 (the approximate value of r for which m + Ev(t) is minimum at C = 10) have associated with them the desired error probability. Since d = .1, we would take a first sample of size

    m = 4r²/d² = 4(1)²/.01 = 400 .

Consulting table III, we see that the test will require an average total of approximately 789 observations. In terms of this expected number of observations, the two-stage test would lie about one-half of the way between the analogous one-stage and sequential probability ratio tests, which would require on the average approximately 1051 and 518 observations, respectively.
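The arithmetic of the example, using the d²-scaled entries of the C = 10, r = 1 row of table III:

```python
d = 0.1
r = 1.0
# (10.38): (1/2) d sqrt(m) = r  =>  m = 4 r^2 / d^2
m = 4.0 * r * r / (d * d)                 # first sample size, 400
# d^2-scaled entries from the C = 10, r = 1 row of table III:
d2_m_plus_Ev, d2_nbar, d2_Ebar = 7.892, 10.507, 5.177
two_stage = d2_m_plus_Ev / (d * d)        # expected total, about 789
one_stage = d2_nbar / (d * d)             # about 1051
sprt      = d2_Ebar / (d * d)             # about 518
```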
A comparison of computed and asymptotic formula values of x(y,C) indicates a fairly close correspondence between the two for the larger values of C. E.g., at C = 100, the computed values of x (table II) at y = 0, .015, .0196 are, respectively, .0₃60, .0₃41, .0₃29, while the corresponding values obtained by using (4.26) are .0₃52, .0₃33, .0₃21. Comparison was also made at C = 50 and C = 10, where the correspondence was found not quite so good, e.g., at C = 10, y = 0, table II gives .02360, (4.26) gives .01693. Computations of d²Ev and Q were made only up to C = 10. At this value, computed and asymptotic formula values of d²Ev are reasonably close for r ≤ .5, e.g., for r = .4, .5, table III gives d²Ev = 9.3, 9.0, respectively, while formula (5.36) gives 8.8, 8.4. For larger values of r, larger values of C are needed for reasonable comparison. The leading terms of the asymptotic formula for Q (6.46) are independent of r and give, for C = 10, Q ≈ .036. This is not too bad an approximation for r ≤ 1.5 (see table III), but for larger values of r, C is again too small for a reasonable comparison.
The results obtained here are, with an exception indicated below, not directly comparable to other two-stage procedures discussed in the literature. Owen [1] has proposed a double sample test which, at least for the example cited above, seems almost as good, in terms of expected number of observations, as the Bayes test. For α = .05, a first sample one-half the size of the analogous one-stage sample is taken. Depending on its outcome, following a certain intuitively proposed rule, one or the other alternative may be accepted, or an additional sample, equal in size to the analogous one-stage sample, is taken, and then the decision between alternatives made. Computations in the paper indicate that in the symmetric case, the ratio of expected total sample size to the analogous one-stage sample size will be .757 for this procedure. This is but slightly higher than the value, .751, which may be obtained for this ratio from the above example.
As indicated by (10.41), the Bayes solution in the symmetric case provides a lower bound for the average number of observations required by double sampling schemes with fixed first sample size, and in this sense it is best for deciding between two simple alternatives. The laborious computations necessary to exhibit these tests, their average sample sizes, and error probabilities, with sufficient accuracy for practical application, are a serious drawback to their use.
BIBLIOGRAPHY

1. Owen, D., "A Double Sample Test Procedure," Annals of Mathematical Statistics, 24 (1953), 449-457.

2. Rosser, J. B., Theory and Application of ∫₀^z e^{−x²} dx, New York: Mapleton House, 1948, 41-43.

3. Wald, A., Sequential Analysis, New York: John Wiley and Sons, 1947.

4. Wald, A., and Wolfowitz, J., "Optimum Character of the Sequential Probability Ratio Test," Annals of Mathematical Statistics, 19 (1948), 326-339.

5. Wald, A., Statistical Decision Functions, New York: John Wiley and Sons, 1950.

6. Whittaker, E., and Robinson, G., The Calculus of Observations, London: Blackie and Son Limited, 1944, 143-145.

7. Tables of Circular and Hyperbolic Sines and Cosines, Federal Works Agency, 1939.

8. Tables of the Exponential Function e^x, National Bureau of Standards Applied Mathematics Series 14, 1951.

9. Tables of Natural Logarithms, 3, 4, Federal Works Agency, 1939.

10. Tables of Normal Probability Functions, National Bureau of Standards Applied Mathematics Series 23, 1953.