UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.

SEQUENTIAL BAYES PROCEDURES FOR COMPARING TWO PARAMETERS
WHEN OBSERVATIONS ARE TAKEN IN PAIRS

by Sudhindra N. Ray

June 1963

This research was supported in part by Public Health Service Research Grant GM-10397-01, from the Division of General Medical Sciences, National Institutes of Health.

Institute of Statistics Mimeo Series No. 369

TABLE OF CONTENTS

CHAPTER                                                          PAGE

ACKNOWLEDGMENTS                                                     v

I    INTRODUCTION                                                   1
     1.0  Summary                                                   1
     1.1  Statement of the problem                                  2
     1.2  Existing methods                                          3
     1.3  Criticism of the existing procedures                     10
     1.4  A decision-theoretic formulation                         13
     1.5  Reduction of the original problem                        20
     1.6  A decision-theoretic formulation for the binomial
          problem                                                  25
     1.7  Review of some earlier relevant works                    28
     1.8  Chapter by chapter summary of the thesis                 29

II   BAYES PROCEDURES WITH LINEAR LOSS STRUCTURE                   33
     2.0  Summary                                                  33
     2.1  Some well-known results about Bayes sequential
          procedures with a general model                          33
     2.2  Binomial model                                           38
     2.3  Beta prior distribution                                  44
     2.4  Determination of the point of truncation in the
          binomial problem                                         47
     2.5  Procedure for obtaining the optimum boundary in the
          binomial problem                                         56
     2.6  Some points useful for computation of the optimum
          boundary                                                 59
     2.7  Trinomial model                                          62
     2.8  Dirichlet a priori distribution                          64
     2.9  Determination of the points of truncation in the
          trinomial problem                                        65
     2.10 Procedure for obtaining the optimum boundary in the
          trinomial problem                                        71
     2.11 Some points useful for computation of the optimum
          boundary                                                 73

III  BAYES PROCEDURES WITH MODIFIED LOSS STRUCTURE                 76
     3.0  Introduction and summary                                 76
     3.1  Characterization of the Bayes procedure with a general
          model                                                    77
     3.2  Modified loss structure with absolute deviation cost     86
     3.3  Binomial model                                           88
     3.4  Trinomial model                                          93

IV   ASYMPTOTIC BEHAVIOUR OF OPTIMAL SOLUTION                      96
     4.0  Introduction and summary                                 96
     4.1  Binomial problem                                         98
     4.2  Series expansion for the optimal solution for large x
          (binomial problem)                                      107
     4.3  Trinomial problem                                       114
     4.4  Series expansion for the optimal solution for large x
          (trinomial problem)                                     122

V    BOUNDS ON ASYMPTOTIC SOLUTION                                129
     5.0  Introduction and summary                                129
     5.1  Binomial problem with absolute deviation cost           132
     5.2  Trinomial problem                                       141

BIBLIOGRAPHY                                                      150

ACKNOWLEDGMENTS

I am deeply indebted to Professor William Jackson Hall for proposing the problems in this thesis, for his numerous suggestions and comments and, above all, for his unfailing confidence in me even in those bleakest moments when, without his sympathetic encouragement and personal warmth, I could hardly proceed. I wish to express my sincere thanks to Professor Wassily Hoeffding, Professor Walter L. Smith and Professor Norman L. Johnson, who read the manuscript and offered many helpful suggestions. I wish to thank the Government of India and the United States Public Health Service for the financial support. I am most grateful to Mrs. Doris Gardner for her quick and accurate typing of the manuscript. Also, my sincerest thanks to Martha Jordan for her help in guiding me through the labyrinth of rules, regulations and assorted pitfalls.

CHAPTER I

INTRODUCTION

1.0 Summary

In section 1.1 the basic problem of the thesis is stated, namely that of comparing the parameters of two Bernoulli processes from paired samples. Section 1.2 contains a brief survey of existing methods of solving the problem, most of which are based on the Neyman-Pearson theory. In section 1.3 some shortcomings of the existing methods are discussed.
In section 1.4 a decision-theoretic formulation of the basic problem, along with some realistic specifications of loss and cost, is proposed. In section 1.5 the original problem of comparing two binomial probabilities is reduced to that of comparing two cell probabilities in a certain trinomial problem. Section 1.6 contains a corresponding formulation for a single Bernoulli process. Section 1.7 contains a brief review of some results that are relevant for the analysis of the various decision problems formulated. Finally, a brief chapter-by-chapter summary of the thesis is presented in section 1.8, which is concluded with a schematic table of contents.

1.1 Statement of the problem.

Consider two independent data-generating processes. Each process yields a sequence of independent observations of dichotomous type, say, successes and failures. Suppose also that the observations are obtained in pairs from the two processes. Let p_1 and p_2 be the unknown probabilities of a success for the two processes. Now consider the problem of selecting the process with the larger success probability on the basis of a sequence of pairs of observations. The method of sampling being sequential, our primary problem is that of obtaining a good sampling rule that tells us when to stop sampling.

The statistical formulation as stated above is typical of many practical problems arising in various fields of application. In clinical trials, for example, one may wish to compare two alternative treatments or drugs, say T_1 and T_2. Observations may be obtained in pairs, one member of each pair for each treatment. Suppose they are categorized into successes and failures according as the patients are cured or not, or, say, according as the blood pressure readings of the patients are below a certain level or not. Here p_1 and p_2 are the unknown probabilities of a success with the treatments T_1 and T_2 respectively.
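The paired sampling scheme just described can be pictured with a short simulation. The following sketch is ours, not the thesis's (modern Python; the success probabilities are purely illustrative): it draws paired dichotomous observations from two independent Bernoulli processes and tallies the successes for each.

```python
import random

def draw_pairs(p1, p2, n, seed=0):
    """Draw n pairs of Bernoulli observations, one from each process.

    Returns a list of (x1, x2) with x = 1 for a success, 0 for a failure.
    """
    rng = random.Random(seed)
    return [(int(rng.random() < p1), int(rng.random() < p2)) for _ in range(n)]

pairs = draw_pairs(0.6, 0.4, 1000)
s1 = sum(x1 for x1, _ in pairs)   # successes with process I
s2 = sum(x2 for _, x2 in pairs)   # successes with process II
```

The question the thesis studies is when such a stream of pairs may be stopped and one process recommended.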
Similar situations arise in industrial production processes where the effectiveness of two alternative production methods, say P_1 and P_2, are to be compared. The effectiveness of a process is measured in terms of the proportion of effective units produced, all units being classified simply as effective or defective. Here p_1 (p_2) is the unknown probability of a unit being effective when it is produced using method P_1 (P_2). For the sake of a descriptive terminology, but without any loss of generality, we shall sometimes refer to the two situations described above.

1.2 Existing methods.

The formulation stated in the previous section is as yet incomplete. The specification of further details, including the ultimate objective to which the results of the investigation are to be put, makes all the difference in the particular approach to be used in analysing such data. In almost all of the existing methods of analysis, this problem of double dichotomies(1) has been posed as a hypothesis testing problem in the framework of the Neyman-Pearson theory. A very brief review of the important existing methods is given below.

(a) Fixed sample size procedures. Perhaps the most common method is the classical one of taking samples of fixed size and analysing the sample proportions of successes as if they were normally distributed, using a one-sided or a two-sided test of the difference between the two means. The two-sided test is, in effect, a chi-square test.

(1) We have used the term 'double dichotomies' in the general sense as used, for example, in Wald [39] and not in the restricted sense of Barnard [1]. The numbers in square brackets refer to the bibliography.

Another common non-sequential method is to apply Fisher's exact test, as mentioned briefly below. The data are put in a 2 x 2 table as follows.

                  Success        Failure            Total
    Process I     a_1            n - a_1            n
    Process II    a_2            n - a_2            n
    Total         a_1 + a_2      2n - a_1 - a_2     2n

The exact probability of such a configuration, conditional on the marginals being fixed and the null hypothesis p_1 = p_2 being true, is given by the hypergeometric term. Probabilities of "more extreme" (depending on whether a one-sided or a two-sided test is wanted(2)) configurations are obtained similarly. The observed result is judged to be insignificant or not at a fixed level of significance, according as the total probability of the observed configuration along with more extreme configurations exceeds the fixed level, or not.

The hypergeometric distribution being discrete, the size of Fisher's exact test may not attain the traditional levels of significance. The corresponding randomized version of the test [27, p. 14?] is a similar test of Neyman structure and is essentially a uniformly most powerful unbiased test of the hypothesis u <= 1 against u > 1 for the one-sided case, or of the hypothesis u = 1 against u != 1 for the two-sided case, where

    u = [p_1 / (1 - p_1)] / [p_2 / (1 - p_2)],    0 < u < infinity,

is called the ratio of odds, and u <, =, > 1 according as p_1 <, =, > p_2.

(2) Fisher's exact test as described in [18, p. 9?] is one-sided.

Fisher's exact test is a conditional one in the sense that in the proper "frame of reference" the marginal totals of the above 2 x 2 table are kept fixed. A different sort of conditioning may be applied to the data to obtain another exact, though not so efficient, test(3) that is particularly relevant when only successes (or equivalently failures) are observable. Keeping the total number of successes fixed, the number of successes with treatment T_1 is distributed binomially with a parameter w, 0 < w < 1, where w <, =, > 1/2 according as p_1 <, =, > p_2. The problem of testing the difference between the two binomial parameters p_1 and p_2 is thus reduced to that of testing the departure of the single binomial parameter w from 1/2.
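Both conditional tests just described reduce to tail probabilities of standard discrete distributions. As an illustration (ours, not part of the thesis; the counts are hypothetical), the hypergeometric term of Fisher's exact test, and a one-sided tail probability, can be computed as follows.

```python
from math import comb

def hypergeom_prob(a1, s, n):
    """P(a1 successes on process I | total successes s, n pairs),
    conditional on the margins, under the null hypothesis p1 == p2."""
    return comb(n, a1) * comb(n, s - a1) / comb(2 * n, s)

def fisher_one_sided(a1, s, n):
    """One-sided exact p-value: probability of a1 or more successes
    falling to process I."""
    return sum(hypergeom_prob(k, s, n) for k in range(a1, min(s, n) + 1))

# e.g. 8 of the 10 total successes fell to process I, with n = 10 pairs
p = fisher_one_sided(8, 10, 10)
```

The probabilities over the whole conditional support sum to one, and the tail probability is compared with the chosen level of significance in the usual way.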
One-sided or two-sided exact tests of the hypothesis w = 1/2 may be constructed in the usual way.

The methods described so far do not take account of the natural pairing of the observations present in our formulation of the problem in section 1.1. Each pair of observations coming from the two processes is one of the four types SS, FF, SF and FS. We shall use the term tied pairs for those recorded as SS or FF, and untied pairs for those recorded as SF or FS. The problem of testing the equality of p_1 and p_2 may be reduced, as before, to that of testing the departure from the value 1/2 of another binomial parameter v defined as

    v = u / (1 + u),    0 < v < 1,

the probability that an untied pair is an SF. It is easily verified that v <, =, > 1/2 according as p_1 <, =, > p_2. Wald [39, p. 101] proposed an exact non-sequential test for the double dichotomies using the above reduction. The test of course depends on a given pairing of the observations from the two processes.

(3) This test is reminiscent of a test for the equality of two Poisson parameters [27, p. 141], and, along with its sequential analog, was brought to our attention by Professor W. J. Hall.

It is to be noted that any test, sequential or non-sequential, for the double dichotomies, when based on the first type of reduction, depends only on the successes (or only on the failures) irrespective of any pairing, while, when based on the second type of reduction, it depends only on the untied pairs for a given pairing. Inasmuch as these tests fail to utilize all the relevant data, they cannot be fully efficient. In constructing a test for the double dichotomies it is, however, quite a standard practice to take account of any natural pairing of observations, for some well-understood reasons also explained by Wald [39, p. 10?]. It would be interesting to know the relative merits of the two types of reduction and to make a comparison between the two associated tests as to their relative performances vis-a-vis
other tests, viz., the classical test and Fisher's exact test for the non-sequential case.

(b) Sequential procedures. In the absence of any natural pairing of observations, the problem of testing p_1 = p_2 sequentially may be reduced to that of testing w = 1/2 sequentially. Since by the very formulation of our problem we assume the existence of such a pairing, and since most of the existing sequential procedures make use of this fact by basing the tests on the untied pairs, we shall no longer consider any test based on successes alone. It is to be noted, however, that corresponding to each of the tests based on untied pairs to be described below, we could have a test based on successes alone.

The existing sequential procedures may be considered in two broad groups, according as the number of hypotheses considered is two or three, giving rise to the so-called two-decision or three-decision approaches.

(i) Two-decision approaches. Wald's sequential procedure [39, p. 102] for analyzing double dichotomies consists in selecting two parameter points v_0 and v_1 (v_0 < 1/2 < v_1) and two error probabilities alpha and beta (alpha, beta > 0, alpha + beta < 1), and then setting up a sequential probability ratio test (SPRT) of the hypothesis H_0: v = v_0 against H_1: v = v_1 in the usual way.

Since the v-parameter is not intuitively very meaningful, Wald describes the test and its operating characteristic (OC) and average sample number (ASN) function in terms of the parameter u = v/(1 - v), the ratio of odds, considered by Girshick [12] as an appropriate measure of divergence between the two parameters p_1 and p_2. It may be possible for the experimenter to choose u_0 and u_1 (0 < u_0 < 1 < u_1 < infinity) such that divergences between p_1 and p_2 at least as large, in this measure, as u_1 or u_0 are worth detecting. Then v_i = u_i/(1 + u_i), i = 0, 1.
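Wald's SPRT on the untied pairs admits a compact log-likelihood-ratio form. A minimal sketch (ours; the boundary constants come from the usual Wald approximations log((1-beta)/alpha) and log(beta/(1-alpha)), and the input sequence is hypothetical, with 1 recording an SF pair and 0 an FS pair):

```python
from math import log

def sprt_binomial(xs, v0, v1, alpha, beta):
    """Wald SPRT of H0: v = v0 against H1: v = v1 on a 0/1 sequence xs.

    Returns ('accept H1' | 'accept H0' | 'continue', number of untied
    pairs used).
    """
    a = log((1 - beta) / alpha)      # upper boundary: accept H1
    b = log(beta / (1 - alpha))      # lower boundary: accept H0
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += log(v1 / v0) if x else log((1 - v1) / (1 - v0))
        if llr >= a:
            return "accept H1", n
        if llr <= b:
            return "accept H0", n
    return "continue", len(xs)

decision, n = sprt_binomial([1] * 12, v0=0.3, v1=0.7, alpha=0.05, beta=0.05)
```

Note that n here counts only untied pairs, a point taken up critically in section 1.3.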
Wald has also discussed the considerations that may govern the choice of u_0 and u_1. Because of the symmetrical footing on which the two treatments stand with respect to each other in our problem, attention might be restricted to procedures with u_1 = 1/u_0, i.e., with

    |v_i - 1/2| = (1 - u_0) / 2(1 + u_0) = delta (say),    i = 0, 1.

It may be verified that the resulting procedure is then the same as that of Girshick. If, moreover, the two error probabilities alpha, beta are taken to be equal, we arrive at symmetrical procedures. For this class of procedures, Hall [25] has proposed a modification to effect a reduction in the ASN when p_1 and p_2 are approximately equal.

(ii) Three-decision approaches. Besides the two decisions of judging one of the two treatments to be better than the other, a third one, of making no such judgment and proclaiming them to be the same for all practical purposes, is now introduced. This is the approach of the common two-sided non-sequential tests. Most of the procedures considered by
Armitage £!!:.7 has considered a class of restricted (truncated) procedures in Which sampling is continued until ene of three specified boundaries is reached leading to the three possible decisions about v and hence about PI and P2" He con- siders various types of boundaries and finds the performance of' the resuIting procedures in terms of the usual criteria of the Neyman-Pearson theoretic formulation. The sequential procedures described so far were concerned with the derived problem in terms of the single binomial parameter v are based solely on the untied observations. Darwin fl§} and hence bas con- sidered within the three-decision approach a symmetrical truncated procedure which utilizes the tied pairs without discriminating between 10 the two types of tied pairs. If (Xli I X2i ) is the sequence of pairs of observations where for a success \:i = for a failure i = 1, 2, ••• , M. = 1, k Zi S n = 2, and Xli .. X2i , i n n = I: Zi' i=l ..., M , = 1, 2, = 1, 2, ••• , M , then, Darwin's procedure accepts the three decisions according as where Sn crosses the boundaries Sn = ~, hand M are certain positive integers. aI' a , a 2 o Sn = -h and n:: M, He considers the problem of choosing hand M such that the procedure has a high probability of leading to the correct decision. We shall also pro- pose in section 1.5 procedures based on Z IS. n 1.3 Criticism of the existing procedures. As mentioned and explained by Armitage 1 5, p. 5"J.7, the first difficulty "may be caused by the fact that the measurements in which one is primarily interested are the probabilities of success P2' PI and whereas the sequential design is specified in terms of the derived quantity v". This difficulty becomes much more relevant when one considers a decision-theoretic formulation of the problem. it is agreed that v PI and P2' Once is the primary parameter of interest rather than all the usual decision-theoretic specifications regarding lOSS, cost, prior distribution, etc. 
might well center around v, a binomial parameter, as considered in section 1.6.

In all the sequential procedures mentioned in the previous section where the original problem is reduced to one about a single binomial parameter v, the ASN function, as well as the terminologies 'truncation' or 'termination', etc., refer only to the untied portion of the sequence of pairs of all observations. The total amount of expected sampling can be obtained in each case by dividing the ASN by 1 - pi_0, where pi_0 is the probability of a pair being tied. Thus the total amount of sampling before reaching a decision might be quite high if pi_0 is sufficiently large. The motivation behind introducing the class of restricted procedures is that, unlike those of Wald and Girshick, they have the relatively more attractive feature of possessing a known upper bound, but only on the number of untied pairs and not on the total number of pairs. Thus these procedures of Armitage (as well as of Hall) are really not truncated in the sense that Darwin's procedure is truncated.

It is quite reasonable that the tied observations do not intuitively contain much, if any, information about the difference p_1 - p_2, and hence may not seem relevant for any test of p_1 - p_2. They cannot be discarded by sufficiency or invariance considerations, however, but they certainly involve some cost. It might happen, in all the sequential procedures mentioned above except Darwin's, that the available resources for continuing sampling are exhausted before a sufficient number of untied observations are obtained to reach a decision. Also, in Darwin's approach, the choice of a particular procedure with desirable power properties depends on the unknown probability pi_0. It is true, of course, that in practice the experimenter has a rough knowledge of the magnitude of pi_0 and may utilize this, as shown by Armitage [5, p. 5?], in choosing a suitable sequential plan.
But this is exactly what a Bayesian analysis attempts to do in a more formal way.

We may mention here that informal Bayesian analyses of double dichotomies are to some extent available. By assigning a prior distribution for the two parameters, one may calculate the posterior distribution after any amount of sampling, and discontinue the experiment when the posterior distribution seems adequately concentrated to permit a decision. Novick [3?] has discussed such approaches in the context of clinical trials.

Finally, Anscombe [3] has questioned the appropriateness for our problem of the very foundations on which the existing procedures of the previous section rest, viz., the operating-characteristic concepts of the Neyman-Pearson theory of tests. His basic thesis in [3] is that the existing procedures might be appropriate in devising a routine decision procedure that will work sufficiently well in the long run, but not so in a problem where we have to analyze, interpret and make some recommendations on the basis of just one series of observations. Whatever happened is relevant for our present action, and not whatever could have happened but did not. Anscombe also criticizes the consideration of the third decision of taking no positive action. In some applications one has to recommend only one of the two treatments, or one of the two production processes, to be used in the future.

1.4 A decision-theoretic formulation.

Anscombe [3] suggests a way in which the problem should be tackled, and illustrates it to some extent with the corresponding normal distribution problem. He suggests a decision-theoretic formulation of the problem in terms of loss and cost functions, which is adapted and elaborated for our problem below.

(a) Actions: The action a_1 (a_2) corresponds to the recommendation that treatment T_1 (T_2) is to be used with future patients. Action a_1 (a_2) is preferable if p_1 - p_2, denoted by theta, satisfies theta > 0 (theta <= 0).

(b) Costs per observation: We shall consider only linear cost functions, i.e., the cost of n observations will be proportional to n, the constant of proportionality, denoted by C, being termed the cost per observation. When interpreted as the usual cost of performing an experiment, C may be regarded as a constant. There arise, however, certain situations where C might depend on the unknown parameter theta. We illustrate this possibility below.

Suppose, in the industrial process example, that the effective items produced during the period of trial (to determine which one is the better process) are marketable. Suppose the monetary utility per effective unit (viz., profit) is, say, k in some monetary unit. Had we known which one was the better process and allocated the two units to it, the expected utility would have been 2k max(p_1, p_2). Instead, we expect only k(p_1 + p_2) by allocating one unit to each process. Each pair of observations thus costs (on the average) k|theta| in unrealized profit.

In the drug-testing situation, apart from the monetary costs of experimentation, we incur a different type of cost peculiar to this situation. For each pair of patients allocated to the two treatments, one patient is subjected to the inferior treatment, giving rise to a so-called ethical cost, considered earlier by Anscombe [3]. Although it is difficult, if not impossible, to assess this cost in terms of any monetary unit, it may not be too unreasonable to assume that the ethical cost thus involved is greater, the greater the difference in the efficacy of the two treatments. By similar considerations as in the previous situation, it turns out that the ethical cost per pair of observations is increasing in |theta|. Suppose, for simplicity, it is k|theta|, where k is any arbitrary positive constant.
We shall ignore in this situation any monetary cost that might be incurred in obtaining a pair of observations, as it cannot easily be combined with the ethical cost. It is to be noted that the symbol k arising above in different connections is generic and stands for any arbitrary positive constant. For reasons to be explained in the next subsection, we may take k = 1. Thus, taking a pair of observations as the new unit of observation, we shall be concerned with the two following special types of cost per observation, viz.,

    Case (i):   C(theta) = 1 for all theta,    called constant cost;
    Case (ii):  C(theta) = |theta|,            called absolute deviation cost.

(c) Losses due to wrong action: It is well known that a testing-of-hypothesis approach, on which some of the current procedures are based, when formulated as a decision-theoretic problem, involves only simple loss functions. We shall consider below two different types of loss structure that may be more realistic.

I. Linear loss structure: The losses for the respective actions are

    (1.1)  L(theta, a_1) = 2A|theta|  if theta < 0;   = 0  if theta >= 0;
           L(theta, a_2) = 2A theta   if theta > 0;   = 0  if theta <= 0.

It is instructive to note that these are already in the form of regret functions: L(theta, a_i) vanishes whenever a_i is the preferable action, i = 1, 2. The constant of proportionality 2A (the factor 2 is introduced as a notational convenience, to be understood subsequently) may be interpreted as some sort of index of the future use envisaged for the recommended drug or process (Anscombe [3]). Since it is the relative magnitude of the loss vis-a-vis the cost per observation that matters in the ultimate analysis, the constant k in C(theta), as mentioned earlier, can always be (and henceforth is assumed to be) standardized to unity by adjusting the factor 2A. After such adjustment, the constant 2A may be interpreted as the number of future patients who will be the potential users of the recommended drug in the drug-testing example, or the number of future items to be produced by the recommended process in the industrial process example.
Perhaps, in practice, the experimenter will be able to assess this constant, just as in the Neyman-Pearson-theoretic formulation he is required to assess the least difference in the parameters considered to be worth detecting. It will follow from the asymptotic theory to be developed in Chapter IV that the sensitivity of the optimum procedures to it diminishes as it increases. Anscombe [3] discusses how to make some guess about this constant in the context of the clinical trial situation. Anyhow, this type of loss structure has been considered by so many authors [11, 26, 29, 33, 21], to name only a few, that it seems to be well established in this branch of the statistical literature.

II. Modified loss structure: We shall have occasion to consider a slightly altered version of the linear loss structure that is especially relevant in the clinical trial situation, as envisaged by Anscombe [3]. For the problem where theta = theta_1 - theta_2 is the difference between the unknown means of two normal populations with known variances, and where Y_n stands for the cumulative sum of the differences of n successive pairs of observations, Anscombe introduces this type of loss structure as follows: "Perhaps 2A should be assessed, not as a constant, but as an increasing function of |Y_n|/n, since the more striking the treatment difference indicated, the more likely it is that the experiment will be noticed and so will affect the treatment given to the future patients. One way of introducing such a dependence of 2A on |Y_n|/n is to assess 2A + 2n as a constant, as though the sum of the number of patients in the trial and of the number of later patients directly affected by the trial were fixed."

If we denote this latter constant as 2A (which should not be confused with the earlier 2A in the definition of the linear loss structure), and if an action is taken after performing tests on n pairs of patients, then the number of prospective patients to use the recommended drug is reduced to 2A - 2n. The constant of proportionality in the proposed loss function will thus depend on n, the stage of sampling. Explicitly, then, we define the modified loss structure as follows:

    (1.3)  L(theta, a_1; n) = 2(A - n)|theta|  if theta < 0, n < A;
                            = 0                if theta >= 0, n = 0, 1, 2, ... ;
                            = 0                if theta < 0, n >= A;
           L(theta, a_2; n) = 2(A - n) theta   if theta > 0, n < A;
                            = 0                if theta <= 0, n = 0, 1, 2, ... ;
                            = 0                if theta > 0, n >= A.

The considerations that led to the formulation of the modified loss function suggest that the absolute deviation cost combines more naturally with this type of loss structure. Hence the modified loss function will be considered, in Chapter III, only in conjunction with C(theta) = |theta| and not with constant cost.

Remark: It seems worthwhile at this stage to point out an interesting connection between the decision-theoretic formulation of our problem, with the modified loss and absolute deviation cost, and the classical two-armed bandit problem with a certain restricted class of strategies as considered by Vogel [3?]. The original two-armed bandit problem is concerned with devising a sampling plan (i.e., prescribing a policy of selecting one of the two possible binomial experiments with unknown probabilities of a success, p_1 and p_2, at each of a fixed total of 2A experiments) in order to maximize the total return. Vogel considers, and also gives some justification for doing so, a class of strategies as follows. In the first 2N steps, each of the two kinds of experiments, I and II, say, is performed an equal number, N, of times. The remaining 2A - 2N steps are performed either with I or with II alone, selected on the basis of the results of the first N pairs of observations.
Nowihe decision whether to continue taking one more pair of observations or to discontinue taking pairs and performing the remaining steps with I is made with the help of a sequential procedure, 0, say. or II, N is thus a random variable bounded above by A. If E(N) denotes the expected number of pairs of steps before deciding to continue with a single type of experiment, if Y£l - 17 denotes the probability of continuing with I£I]} when the procedure o is used, then the total expected return following this plan is given by 19 while the best possible expected reutnr would have been Thus the regret in following 5 is given by which may easily be verified to be the same as the risk function (defined as the expected loss plus expected cost) associated with the pX'ocedure 0 for our problem with modified loss and absolute deviation cost. (d) a priori distributions: Although it is theoretically possi- ble to carry out a Bayesian analysis for any a priori distribution" it has become quite customary for the sake of mathematical convenience to deal with only those a priori distributions that are closed under sampling £42.7" which have alternatively been christened as natural conjugate priors (NaP) by Raiffa and Schlaifer £3'2.7. In most circum- stances praOl' knowledge of the experimenter" which is to be reflected in the prior distribution" is rough and can adequately be quantified with the help of a particular member of any reasonably large family of a priori distributions. The natural conjugate prior distribution of of Pl and P2 is a product of two beta distributions. .. This speci- fication may be satisfactory when a priori it is reasonable to con- 20 sider Pl and P2 to be independent. 
If'" however" the experimenter has some prior reasons to believe that the curative potentialities of' the two drugs go together (which might be due to the fact that the ingredients of the two drugs are to some extent similar), then one should select some joint distribution of' Pl and P2 with a positive correlation between them. As this will frequently be the case" a natural conjugate prior is ruled out for the model as it stands. We shall instead convert the problem to another one for which it is possible to specify a natural conjugate family of prior distributions with correlations between Pl and P2. The a priori distribution with which we shall be mostly concerned in our analysis in Chapter II and III will thus be specified at the end of the next subsection in which we describe the intepded reduction of the original problem. 1.5 Reduction of the original problem. In so far as we are interested only in the difference P1 - P2 Q and not in the actual values of Pl and P2 (the specification of loss and cost in the previous section involving P1 and P2 ~) only through the problem would have been a lot easier, if, like the corresponding normal problem, there existed a sufficient statistic Y whose distribution involved only Q. We can, however, represent Q as the difference of the probabilities of the two types of untied observations. Moreover, due to the symmetrical footing on which the two drugs stand to each other" it would seem that the two types of tied observations contain little or no discriminatory information about Q = Pl-P2' 21 and, therefore, may be coalesced into a single group with little loss of efficiency. The probability of a tied pair is thus We are thus led to consider the statistic the difference of the two Bernoulli random variables. Z has the folloW1ng~al distribution: ifZ=q (1.4) = ~o p~=g = ~l ~i ~ 0, i = 0,1,2 ~o ~=~ = + ~l + ~2 = 1 . 
The specifications about the actions, losses and costs per observation described in the previous section remain the same for this modified problem if one interprets Q not as p1 − p2 but as π1 − π2. We have thus reduced the original problem of comparing two binomial parameters to a problem of comparing two cell probabilities in a certain trinomial model. The fact that for the corresponding problem of comparing two normal means it is sufficient to restrict attention to the difference Y = X1 − X2 of the two observations, by appealing to the principles of invariance and sufficiency, and the fact that the binomial problem is approximated by the normal problem at least for large samples, lend some additional justification for the present reduction. The main argument for this modification is, however, the resulting mathematical simplicity engendered by the existence of a convenient family of a priori distributions that is NCP for the trinomial model. This family is defined below.

A priori distribution:

    d ξ(π1, π2 | a, b, c) = [Γ(a+b+c) / (Γ(a) Γ(b) Γ(c))] π1^(a−1) π2^(b−1) (1 − π1 − π2)^(c−1) dπ1 dπ2,

with a, b, c positive integers. This has been called the (bivariate) Dirichlet distribution [41, p. 171]. This family possesses certain properties that make it as tractable mathematically as the beta family. Some of these properties are recorded in the subsection below for future reference.

1.5.1 Some properties of the Dirichlet distribution.

(a) The marginal distributions of the Dirichlet distribution are beta distributions. Explicitly, π1 has a beta distribution with parameters (a, b+c) and π2 one with parameters (b, a+c).

(b) The means, variances and covariance of π1 and π2 are given by

    (1.5a)  E(π1) = a / (a+b+c),   V(π1) = a(b+c) / [(a+b+c)² (a+b+c+1)],
    (1.5b)  E(π2) = b / (a+b+c),   V(π2) = b(a+c) / [(a+b+c)² (a+b+c+1)],
            Cov(π1, π2) = −ab / [(a+b+c)² (a+b+c+1)].

From these it follows that

    (1.6)   E(π1 − π2) = (a − b) / (a+b+c),
            V(π1 − π2) = (4ab + ac + bc) / [(a+b+c)² (a+b+c+1)].

(c) The family of Dirichlet distributions forms a natural conjugate family to the trinomial distribution, so that the posterior distribution of π1, π2, after observing n independent observations each following a trinomial distribution, is another Dirichlet distribution with new parameters. Specifically, if the a priori distribution is d ξ(π1, π2 | a0, b0, c0), and if out of the n observations a, b, c (a + b + c = n) belong to the three cells with probabilities π1, π2, 1 − π1 − π2 respectively, then the a posteriori distribution is d ξ(π1, π2 | a0 + a, b0 + b, c0 + c).

1.5.2 Reduction of prior information.

The modification of the original two-binomial problem into the trinomial problem gives rise to the technical problem of converting the prior information about p1 and p2 into information about π1 and π2. Although a precise analytical transformation of this information is inconvenient mathematically (in fact, the transformation from p1, p2 to π1, π2 is not one to one, but one to two at most), any such sophisticated treatment may not be necessary, since a priori information is usually of a rough nature. A few lower-order moments of π1 and π2 are sufficient to fit a particular Dirichlet distribution. Now it might be easier on the part of the experimenter to form some idea, from previous experience, about the moments of p1 and p2 rather than about those of π1 and π2. In such situations the following relations might be helpful; they give the means and variances of π1 and π2 in terms of those of p1 and p2, assuming independence of p1 and p2:

    E(πi) = E(pi) [1 − E(p3−i)],
    V(πi) = [V(pi) + E²(pi)] [V(p3−i) + (1 − E(p3−i))²] − E²(pi) [1 − E(p3−i)]²,   i = 1, 2.

It may be interesting to know, in particular, what a uniform distribution of p1 and p2, reflecting little or no a priori knowledge about p1, p2, means in terms of π1, π2, and vice versa.
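The moment formulae (1.5)-(1.6) and the conversion just described are easy to automate. The sketch below (ours, not the thesis's; exact rational arithmetic via Python's Fraction) evaluates the Dirichlet moments and converts uniform, independent p1, p2 into moments of π1:

```python
from fractions import Fraction as F

def dirichlet_moments(a, b, c):
    """E, V and Cov of (pi1, pi2) under the Dirichlet prior, per (1.5)."""
    s = a + b + c
    d = s * s * (s + 1)
    return {"E1": F(a, s), "E2": F(b, s),
            "V1": F(a * (b + c), d), "V2": F(b * (a + c), d),
            "cov": F(-a * b, d)}

def pi1_moments(Ep1, Vp1, Ep2, Vp2):
    """Mean and variance of pi1 = p1(1 - p2) for independent p1, p2."""
    E = Ep1 * (1 - Ep2)
    V = (Vp1 + Ep1**2) * (Vp2 + (1 - Ep2)**2) - E**2
    return E, V

if __name__ == "__main__":
    m = dirichlet_moments(2, 3, 4)
    # (1.6): V(pi1 - pi2) = (4ab + ac + bc) / ((a+b+c)^2 (a+b+c+1))
    assert m["V1"] + m["V2"] - 2 * m["cov"] == F(44, 810)
    # uniform, independent p1, p2: E(p) = 1/2, V(p) = 1/12
    E, V = pi1_moments(F(1, 2), F(1, 12), F(1, 2), F(1, 12))
    assert (E, V) == (F(1, 4), F(7, 144))
```

The second assertion is the exact version of the "rough calculation" carried out next in the text.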
A rough calculation using the above formulae shows that a uniform prior distribution of p1, p2 over the square [0,1] × [0,1] (i.e., assuming p1 and p2 independent and uniform) corresponds approximately to the Dirichlet distribution d ξ(2, 2, 5).

If the experimenter feels that p1 and p2 are not independent, then the following formulae might be useful:

    E(πi) = E(pi) [1 − E(p3−i)] − ρ [V(pi) V(p3−i)]^(1/2),
    V(πi) ≈ V(pi) [1 − E(p3−i)]² + E²(pi) V(p3−i) − 2ρ E(pi) [1 − E(p3−i)] [V(pi) V(p3−i)]^(1/2),   i = 1, 2,

where ρ is the correlation coefficient between p1 and p2. Using these formulae it may be verified that a joint distribution of p1 and p2 with uniform marginals and a correlation ρ = 1/2 corresponds approximately to the Dirichlet distribution d ξ(4, 4, 11).

1.6 A decision-theoretic formulation for the binomial problem.

In this section we write down, for the sake of completeness, a corresponding decision-theoretic formulation for the problem involving a single binomial parameter, p. The binomial problem has an independent interest of its own. In fact, the problem with linear loss structure and constant cost has already been considered by Moriguti and Robbins [30]. Moreover, this might serve as a statistical formulation for the original problem when it is reduced, as shown on page 6, to a problem about a single binomial parameter, neglecting the ties. This model also arises with continuous responses to the two treatments, such as temperature or blood-pressure readings, in the following way. Instead of discretizing the continuous responses as lying above or below a certain level, as mentioned on page 2, we consider the original responses as such. Suppose specifically that X1 and X2 are distributed with absolutely continuous distribution functions F1 and F2 respectively. Suppose also that the experimenter does not wish to make any assumption about any specific parametric form, say normal, exponential, etc.,
for F1 and F2, and wants to test whether X1 is larger than X2 in some probability sense (e.g., stochastically larger [27]); then the present formulation can be applied to the binomial parameter

    p = P[X1 > X2] = ∫ F2 dF1.

The various elements of a decision-theoretic formulation are specified below.

model:  P[X = 1] = p,  P[X = 0] = 1 − p,  0 ≤ p ≤ 1.

actions:  a1 preferable if p > 1/2;  a2 preferable if p ≤ 1/2.

costs per observation:
    (i)  C(p) = 1, all p  (constant cost);
    (ii) C(p) = |p − 1/2|  (absolute deviation cost).

losses:

I. Linear loss structure:

    (1.9)   L(p, a1) = 2A|p − 1/2|  if p ≤ 1/2,
                     = 0            if p > 1/2;
            L(p, a2) = 0            if p ≤ 1/2,
                     = 2A(p − 1/2)  if p > 1/2.

II. Modified loss structure:

    (1.10)  L(p, a1; n) = 2(A − n)|p − 1/2|  if p ≤ 1/2, n < A,
                        = 0                  if p ≤ 1/2, n ≥ A,
                        = 0                  if p > 1/2, n = 0, 1, 2, ... ;
            L(p, a2; n) = 0                  if p ≤ 1/2, n = 0, 1, 2, ... ,
                        = 2(A − n)(p − 1/2)  if p > 1/2, n < A,
                        = 0                  if p > 1/2, n ≥ A.

a priori distribution:

    (1.11)  d ξ(p) = [Γ(a + b) / (Γ(a) Γ(b))] p^(a−1) (1 − p)^(b−1) dp,

where a, b are positive integers.

We shall use the term binomial problem to denote the decision problem formulated above. In Chapter II we shall consider a slightly more general situation, viz., testing p ≤ p0 against p > p0, where p0 is an arbitrary number, 0 < p0 < 1. The above formulation corresponds to the special case p0 = 1/2. The corresponding decision problem concerning the trinomial model will be called the trinomial problem.

Another important reason for introducing the binomial problem in this thesis is the fact that, being mathematically simpler to solve, it constitutes a first step towards the solution of the trinomial problem. In fact, the analyses for the two problems are so analogous that in the remaining chapters only the binomial problem is treated in detail; the results for the trinomial problem are just stated, omitting many of the details.

1.7 Review of some earlier relevant work.
We have already mentioned Anscombe [5], which contributes rather heavily towards the formulation of our basic problem as stated in section 1.4. Anscombe considers the corresponding normal problem and gives an easily computable outer bound on the optimum boundary. We are, however, concerned with the exact optimum boundary for our problems.

We have also mentioned Moriguti and Robbins [30] in section 1.6 in connection with the binomial problem. Specifically, they consider only linear loss, constant cost and p0 = 1/2. From rather elementary considerations they show the existence of the Bayes procedure for their problem and describe it constructively. They have also given the numerical values of the optimum boundary for certain values of A. Next, they give a heuristic treatment (which we shall adapt for our generalizations) of the limiting behavior of the optimal solution under two different types of limiting tendencies. For the second type of limiting tendency, which turned out to fit the observed numerical results better than the first, they reduced the problem of finding the optimal solution to that of solving a free boundary value problem, thus arriving at the same mathematical problem as Chernoff and Lindley arrived at for the normal problem. But they gave, for the first time, a formal series expansion solution of the free boundary value problem (in the permissible range), which has later been shown to be an asymptotic expansion by Breakwell and Chernoff. Moriguti and Robbins also considered some alternative ad hoc sampling rules and compared them with the optimal one, at least in some restricted sense. Finally, they presented some charts showing the OC curves and ASN curves for the optimal sampling rule for some representative values of the relevant parameters involved.
It will be seen later that the limiting behavior of the optimal solution for the binomial problem reduces to that of the optimal solution for the normal problem under appropriate limiting tendencies. For this reason, results connected with the latter problem are relevant for our purpose. Towards this we mention an important contribution of Bather [8] in outlining some methods of obtaining certain bounds on the optimal solution. These bounds, of course, have nothing to do with the outer boundary of Anscombe for the normal problem, mentioned earlier in this section, or with the possible inner boundary that might be obtained from the modified Bayes rule developed by Amster.

The works reviewed above are directly relevant for our specific problem. It is not intended here to review any of the rapidly growing volume of work connected with the general theory of sequential analysis. An excellent summary of the current state of affairs in this field is given by Johnson [21].

1.8 Chapter by chapter summary.

Chapter I. The basic problem of double dichotomies is stated. Existing methods of analysing such data within the framework of Neyman-Pearson theory are described briefly, along with their main defects and their inappropriateness in the present context. Following Anscombe [5], the problem is then formulated as a decision-theoretic one within the framework of Wald theory, involving more than one realistic loss and cost function. The problem thus formulated is then modified, for reasons that are explained, to one involving the difference of two cell probabilities in a certain trinomial distribution, called the trinomial problem. The corresponding binomial problem is also formulated.

Chapter II. The general theory of statistical decisions as given in Blackwell and Girshick [12] is applied to show that the Bayes procedures for the various decision problems (with linear loss) are truncated with probability one.
It is also described here how to obtain the Bayes sampling rule, and hence the optimum (Bayes) boundary, constructively, for both the binomial and trinomial problems (linear loss structure, and both constant cost and absolute deviation cost).

Chapter III. The usual characterization of a Bayes procedure in the general set-up considered in section 2.1 is extended in section 3.1 to take account of any "stage-dependent" loss function. These results are then specialized to obtain some simple recursion relations characterizing the optimal procedures for the various decision problems with modified loss and absolute deviation cost. These procedures are also shown to be necessarily truncated at a known stage.

Chapter IV. The limiting behavior in large samples of the optimum boundaries for the decision problems studied in Chapter II is considered. Extending Moriguti and Robbins' approach, the problem of finding the normalized optimal boundaries, with appropriate normalizations, is reduced to that of solving certain free boundary value problems for partial differential equations. A series expansion for a certain portion of the optimum boundary is then obtained for each problem. Expansions which enable the computation of the average risk are also given.

Chapter V. An indirect verification of the series expansions of Chapter IV is offered with the help of certain upper and lower bounds on the optimum boundary obtained by applying some methods of Bather [8].

A SCHEMATIC CLASSIFICATION OF THE RESULTS BY SECTIONS

                Exact optimum boundary                 Asymptotic optimum       Bounds
                                                       boundary
                linear loss            modified loss   linear loss
   Model        constant   absolute    absolute        constant   absolute
                cost       deviation   deviation       cost       deviation
   ----------------------------------------------------------------------------------
   binomial     2.2-2.6    2.2-2.6     3.3             4.1-4.2    4.1-4.2      5.1
                (Moriguti                              (Moriguti               (Bather)
                and Robbins)                           and Robbins)
   trinomial    2.7-2.11   2.7-2.11    3.4             4.3-4.4    4.3-4.4      5.2

CHAPTER II

BAYES PROCEDURES WITH LINEAR LOSS STRUCTURE

2.0 Summary. In section 2.1 we collect together some known results of the general theory of statistical decisions that are applied, with increasing specialization, in the subsequent sections. In section 2.2 a binomial model with linear loss and a general p0 is introduced, and sufficient conditions are obtained for a Bayes procedure to be truncated. In section 2.3 a beta prior distribution is introduced and the sufficient condition of section 2.2 is shown to be satisfied. In section 2.4 the point of truncation of a Bayes procedure is defined; it is also shown how to determine it. Section 2.5 gives a description of how to obtain the optimal boundary from the point of truncation by working backwards step by step. Section 2.6 contains some points useful for the computation of the optimal boundary as described in section 2.5. Sections 2.7-2.11 treat the trinomial problem and correspond in numerical order with sections 2.2-2.6.

2.1 Some well-known results about Bayes sequential procedures with a general model.

In this section we collect together some known results [12] that will be useful in obtaining the Bayes sequential procedures for the binomial and the trinomial model with linear loss structure.

Notations:

{Xi}, i = 1, 2, ... : sequence of independent and identically distributed random variables.

fθ(x1) : probability density function of X1 with respect to some measure μ.

Θ : space of states of nature (θ ∈ Θ).

a1, a2 : two possible actions or terminal decisions.

L(θ, ai), i = 1, 2 : losses.

C(θ) : cost per observation.
ξ : a priori distribution (on a σ-field of subsets of Θ).

L(ξ, ai) = ∫ L(θ, ai) d ξ(θ), i = 1, 2.

ρ0(ξ) = min over i = 1, 2 of L(ξ, ai).

fξ(x) = ∫ fθ(x) d ξ(θ).

x_j = (x1, x2, ..., xj), j = 1, 2, ... .

fθ,j = fθ,j(x_j) = Π (i = 1, ..., j) fθ(xi), j = 1, 2, ... .

ξj = ξj(x_j) : the a posteriori distribution given x_j, j = 1, 2, ... ; ξ0 = ξ.

fξ,j = ∫ fθ,j d ξ(θ), j = 1, 2, ... .

N : sample size (a random variable) under a specified decision function δ.

ψ = (ψ0, ψ1, ...), ψj = ψj(x_j), j = 0, 1, ... : a sampling rule, where ψj is the probability that N = j, that is, the probability of termination after observing x1, ..., xj.

φ = (φ0, φ1, ...), φj = φj(x_j), j = 0, 1, ... : a terminal decision rule, where φj is the probability of taking action a2 after observing x1, ..., xj, conditional upon N = j.

δ = (ψ, φ) : a sequential decision function.

Δn, n = 0, 1, 2, ... : the class of all sequential decision functions truncated at N = n, i.e., with ψ0 + ψ1 + ... + ψn = 1 identically in x_n.

Δ1 = union over n ≥ 1 of Δn : the class of all truncated sequential decision functions.

Δ2 : the class of all sequential decision functions which terminate almost certainly, i.e., for which Σ (j ≥ 0) ψj = 1 a.e. for all θ ∈ Θ.

Δ : the class of all sequential decision functions subject to the usual measurability conditions [12].

δξ = (ψξ, φξ) : a Bayes decision function in a specified class.

r(ξ, δ) : the average risk associated with δ.

ρn(ξ) = min over δ ∈ Δn of r(ξ, δ), n = 0, 1, 2, ... : the Bayes risk in Δn (this notation is consistent with the earlier one, ρ0(ξ)).

ρ(ξ) = min over δ ∈ Δ2 of r(ξ, δ) : the Bayes risk in Δ2.

We state the results in the form of the following lemmas.

Lemma 2.1. A Bayes terminal decision rule for Δ', where Δ' is any of Δk (k = 0, 1, 2, ...), Δ1 or Δ2, satisfies, for j = 0, 1, 2, ...,

    (2.1)   φξ,j = 1  if L(ξj, a2) < L(ξj, a1),
            φξ,j = 0  if L(ξj, a1) < L(ξj, a2),
            φξ,j = u  if L(ξj, a1) = L(ξj, a2),

where u, 0 ≤ u ≤ 1, is arbitrary.

Lemma 2.2. If δξ = (ψξ, φξ) is a Bayes decision function in Δ1, then, after observing x1, ..., xj, sampling is terminated if and only if

    (2.2)   ρ0(ξj) ≤ C(ξj) + ∫ ρ(ξj+1) fξj(xj+1) dμ(xj+1),

that is, if and only if the risk of an immediate terminal decision does not exceed the minimal expected risk of taking at least one more observation.

Lemma 2.3. If C(θ) > 0 a.e. (ξ), then lim (n → ∞) ρn(ξ) = ρ(ξ).

Notations. For n ≥ 1 define u_m, v_m, w_m recursively for m = n, n−1, ..., 1 by

    (2.3)   u_m = [ρ0(ξm) + m C(ξm)] fξ,m,   v_n = u_n,
    (2.4)   w_m = min(u_m, v_m), m = n, n−1, ..., 1;
            v_m = ∫ w_(m+1)(x_m, x_(m+1)) dμ(x_(m+1)), m = n−1, n−2, ..., 1.

Lemma 2.4. If there exists a least positive integer n0 such that for every n ≥ n0

    (2.6)   u_(n−1) ≤ v_(n−1), for all x_n,

then ρ(ξ) = ρ_n0(ξ); i.e., the Bayes procedure in Δ2 takes at most n0 observations.

Notations.

    (2.8)   Λi = {ξ : ρ(ξ) = ρ0(ξ) = L(ξ, ai)}, i = 1, 2;   Λ' = Λ1 ∪ Λ2 : the stopping set.
    (2.9)   Λ : the complement of Λ', the continuation set.

Lemma 2.5. Λ1 and Λ2 are convex; i.e., if ξ1 and ξ2 belong to Λi, then any convex linear combination of ξ1 and ξ2 belongs to Λi, i = 1, 2.

Note. Lemma 2.3 follows from a known theorem; all the other lemmas are essentially contained in Blackwell and Girshick's book [12].

2.2 Binomial model.

In this section we apply the results of the previous section with the following specializations, and call the resulting decision problem simply the binomial problem:

    (2.10)  fp(1) = p,  fp(0) = 1 − p,  0 ≤ p ≤ 1; a1 is preferable if p > p0, a2 is preferable if p < p0, where p0, 0 < p0 < 1, is an arbitrary constant;

    (2.11)  L(p, a1) = 2A(p0 − p)  if p ≤ p0,   = 0  if p > p0;
            L(p, a2) = 0           if p ≤ p0,   = 2A(p − p0)  if p > p0.

It follows from the above specializations that, writing

    (2.12)  Γ1 = {ξ : Eξ p > p0},  Γ2 = {ξ : Eξ p < p0},  Γ0 = {ξ : Eξ p = p0},

we have

    (2.13)  ρ0(ξ) = A Eξ|p − p0| − A Eξ(p − p0)   for ξ ∈ Γ1,
            ρ0(ξ) = A Eξ|p − p0| − A Eξ(p0 − p)   for ξ ∈ Γ2,
            ρ0(ξ) = A Eξ|p − p0|                  for ξ ∈ Γ0,

which may be written compactly as

    (2.14)  ρ0(ξ) = A Eξ|p − p0| − A |Eξ(p − p0)|.

The Bayes terminal decision rule (2.1) of lemma 2.1 reduces to

    (2.15)  φξ,j = 0  if E(ξj) p > p0;  = u  if E(ξj) p = p0;  = 1  if E(ξj) p < p0;
            j = 0, 1, 2, ..., n;  n = 0, 1, 2, ...,

and the stopping criterion (2.2) of lemma 2.2 reduces to the functional equation

    (2.16)  ρ(ξ) = min { ρ0(ξ), C(ξ) + fξ(0) ρ(ξ^(0)) + fξ(1) ρ(ξ^(1)) },

sampling being terminated at ξ if and only if the minimum in (2.16) is attained by ρ0(ξ); here ξ^(i) denotes the a posteriori distribution after observing i, and

    (2.17)  fξ(i) = ∫ (0 to 1) p^i (1 − p)^(1−i) d ξ(p), i = 0, 1.

We develop below a sufficient condition for a Bayes procedure for the binomial problem to be truncated with probability one, i.e., for δξ in Δ2 to coincide with δξ in Δ1. Under the assumption of ξ being absolutely continuous, Sobel [36] proves that the Bayes sequential procedure in Δ2 is truncated with probability one for a general set-up that includes ours with constant cost. His method of proof did not allow extension to a cost depending on the parameter; hence the necessity of the following theorem.

Theorem 2.1. A sufficient condition for a Bayes sequential procedure δξ for the binomial problem to be truncated with probability one is:

    (a)  (2.18)  C(p) > 0 a.e. (ξ), and
    (b)  (2.19)  lim (n → ∞) Var(p | ξn) / C(ξn) = 0 a.e., uniformly in the observations.

Moreover, δξ takes at most n0 observations, where n0 is the smallest positive integer such that for every n ≥ n0,

    (2.20)  2A Var(p | ξ(n−1)) / C(ξ(n−1)) ≤ 1  a.e.

Note. Condition (a) is introduced in order that lemma 2.3 should hold. It is trivially satisfied for constant cost. For absolute deviation cost, viz. C(p) = |p − p0|, it is satisfied for any absolutely continuous ξ, and hence in particular when ξ is a beta distribution (1.11).

Proof: Because of lemma 2.3 and lemma 2.4, it is sufficient to show that there exists a least positive integer n0 such that for every n ≥ n0,

    (2.21)  u_(n−1) ≤ v_(n−1), for all x_n.

Writing out explicitly the definitions (2.3) and (2.4) of u_(n−1) and v_(n−1), (2.21) reduces to

    (2.22)  ρ0(ξ(n−1)) fξ,n−1 ≤ ∫ [ρ0(ξn) + C(ξn)] fξ,n dμ(xn), for all x_(n−1), for a fixed ξ.

Writing

    (2.23)  λ(ξ) = ρ0(ξ) − [fξ(1) ρ0(ξ^(1)) + fξ(0) ρ0(ξ^(0))]

for convenience, (2.22) reduces, under the present set-up, to

    (2.24)  λ(ξ(n−1)) ≤ C(ξ(n−1)).

Now from the definitions,

    (2.25)  d ξ^(i)(p) = p^i (1 − p)^(1−i) d ξ(p) / fξ(i), i = 0, 1.

Also, expectation being a linear operation, it follows from (2.25) that for any integrable g(p),

    (2.26)  fξ(1) E(ξ^(1)) g(p) + fξ(0) E(ξ^(0)) g(p) = Eξ g(p).

It follows from (2.13) and (2.26), by identifying ξ with ξ(n−1), that λ(ξ(n−1)) is identically zero for all those ξ(n−1) such that ξ(n−1), ξ(n−1)^(1) and ξ(n−1)^(0) all belong to the same Γi, i = 1 or 2. (2.24) is then satisfied for all these ξ(n−1) and for any non-negative cost C(p).

Each of the remaining ξ's belongs to one and only one of the following three mutually disjoint subsets:

    (2.27)  D0 = {ξ : Eξ p = p0};  D1 = {ξ ∈ Γ1 : ξ^(0) ∈ Γ2};  D2 = {ξ ∈ Γ2 : ξ^(1) ∈ Γ1}.

If ξ ∈ D0, then ξ^(1) ∈ Γ1 and ξ^(0) ∈ Γ2, so that it follows from (2.23), (2.13) and (2.26) that

    (2.28)  λ(ξ) = A fξ(1) E(ξ^(1))(p − p0) + A fξ(0) E(ξ^(0))(p0 − p)
                 = A ∫ (p − p0) p d ξ(p) + A ∫ (p0 − p)(1 − p) d ξ(p)
                 = 2A Var(p | ξ),

remembering that p0 = Eξ p for ξ ∈ D0.

If ξ ∈ D1, then ξ^(1) ∈ Γ1 and ξ^(0) ∈ Γ2, so it again follows from (2.23), (2.13) and (2.26) that

    (2.29)  λ(ξ) = 2A fξ(0) E(ξ^(0))(p0 − p) = 2A ∫ (p0 − p)(1 − p) d ξ(p)
                 = 2A [Eξ(1 − p)² − (1 − p0) Eξ(1 − p)]
                 < 2A [Eξ(1 − p)² − (Eξ(1 − p))²] = 2A Var(p | ξ),

since 1 − p0 > Eξ(1 − p) for ξ ∈ Γ1. It can be shown similarly that for ξ ∈ D2,

    (2.30)  λ(ξ) = 2A [Eξ p² − p0 Eξ p] < 2A Var(p | ξ).

Hence (2.21) is satisfied if there exists an n0 such that for every n ≥ n0,

    (2.31)  Var(p | ξ(n−1)) / C(ξ(n−1)) ≤ 1/(2A)  a.e.,

for which a sufficient condition is (2.19). The second part of the theorem follows immediately from (2.31). The proof of the theorem is thus complete.
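The quantity λ(ξ) of (2.23) is the expected reduction in stopping risk from one more observation, and the case analysis (2.28)-(2.30) gives it in closed form for a beta prior ξ(a, b). The sketch below (ours, not the thesis's; p0 must be passed as an exact Fraction) evaluates λ and checks the bound λ(ξ) ≤ 2A Var(p | ξ) used in the proof:

```python
from fractions import Fraction as F

def beta_moments(a, b):
    """E(p) and E(p^2) under a Beta(a, b) distribution."""
    s = a + b
    return F(a, s), F(a * (a + 1), s * (s + 1))

def lam(a, b, p0, A):
    """lambda(xi) of (2.23) for a beta prior xi(a, b), via (2.28)-(2.30)."""
    s = a + b
    Ep, Ep2 = beta_moments(a, b)
    Eq, Eq2 = beta_moments(b, a)          # moments of 1 - p
    post_success = F(a + 1, s + 1)        # E(p) after one more success
    post_failure = F(a, s + 1)            # E(p) after one more failure
    if Ep == p0:                          # xi in D0: (2.28)
        return 2 * A * (Ep2 - Ep * Ep)    # 2A Var(p | xi)
    if Ep > p0 and post_failure < p0:     # xi in D1: (2.29)
        return 2 * A * (Eq2 - (1 - p0) * Eq)
    if Ep < p0 and post_success > p0:     # xi in D2: (2.30)
        return 2 * A * (Ep2 - p0 * Ep)
    return F(0)                           # one observation cannot change the decision

if __name__ == "__main__":
    A, p0 = 10, F(2, 5)
    l = lam(3, 4, p0, A)                  # E(p) = 3/7 > p0, a failure drops it below
    assert l == F(A, 35)                  # exact value in this case
    Ep, Ep2 = beta_moments(3, 4)
    assert 0 <= l <= 2 * A * (Ep2 - Ep * Ep)
```

The final assertion is the inequality that drives (2.31).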
Remarks: (1) It is to be noted that n0 defined in (2.20) depends on the particular a priori distribution ξ, while the sets Di, i = 0, 1, 2, of (2.27) are defined independently of it.

(2) It follows from the proof above that for the conclusion of the theorem to remain valid it is sufficient that (2.19) hold only for those ξn which belong to D0 ∪ D1 ∪ D2.

2.3 Beta a priori distribution.

From now onwards we shall consider an a priori distribution given by

    (2.32)  d ξ(p) = [Γ(a0 + b0) / (Γ(a0) Γ(b0))] p^(a0−1) (1 − p)^(b0−1) dp,

where a0, b0 are positive integers. It is well known [42] that the beta family of distributions is closed under sampling; in the terminology of Raiffa and Schlaifer [32], (2.32) forms a natural conjugate prior for the binomial model. In other words, if an and bn denote the numbers of successes and failures respectively in a sample of size n, the a posteriori distribution is given by

    d ξn(p) = d ξ(p | a0 + an, b0 + bn),  an + bn = n,  n = 1, 2, ... .

Thus the set of all possible a posteriori distributions that can arise from the a priori distribution (2.32) can be represented by the set of all lattice points in the positive quadrant with origin at (a0, b0). It is well known [12] that the Bayes decision function can be completely characterized by the sets Λ1, Λ2, Λ defined in (2.8), (2.9). In this special case, the sets associated with (2.32) can be represented as certain sets in the space of lattice points mentioned above. Without any risk of confusion we shall denote these various (a, b) sets by the same notations as the corresponding ξ sets.

The optimum boundary (for the decision problem), which will be mentioned quite often henceforth, is formally defined (unlike the terminology in Moriguti and Robbins [30]) as the set of points (a, b) in the stopping set Λ' such that (a−1, b) or (a, b−1) is in the continuation set Λ. Owing to the closure property of the beta family mentioned above, it is not necessary to determine the optimal boundary, along with the various sets Λ1, Λ2, Λ, separately for each a priori distribution. It is possible to describe them in terms of a fixed origin, say (0, 0); to obtain them for a particular a priori distribution like (2.32) we need only shift the origin to (a0, b0). It amounts to regarding the a priori distribution (2.32) as having been arrived at from a sample of a0 + b0 − 2 observations, with a0 − 1 successes and b0 − 1 failures, starting from a uniform prior. In the following analysis the terms optimum boundary, stopping set, continuation set, etc., are defined with respect to this fixed frame of reference. Accordingly, by d ξn(p), unlike previously, we shall mean d ξ(a, b) where a + b = n. It is obvious that any sequential procedure which is Bayes for d ξ(a0, b0) and is truncated at n0 (in the sense that it takes at most n0 observations before it tells one to stop) is also truncated at n0 + a0 + b0 (in the above sense) when referred to this fixed frame of reference. Since any a priori beta distribution can be represented by d ξ(a0, b0) with a0, b0 finite, the condition (2.19) for a Bayes procedure to be truncated remains valid when ξn is given the new interpretation above. We apply this condition in this new light to arrive at the following corollary to Theorem 2.1.

Corollary 2.1. A Bayes sequential procedure for the binomial problem with a beta prior distribution is truncated with probability one.

Proof: As noted before, condition (a) of Theorem 2.1 is satisfied for the particular cases we are concerned with here.
In the symmetric the point of truncation may alternatively be defined as the first stopping point on .1 -0 It does not 1-1 (which is the same as Here by ttfirst" we mean "with smallest sum of co-ordinates" • The determination of the point of truncation is important in the sense that, once it is known, it is possible to obtain the optimum boundary, as will be described afterwards, by working backwards from this point with the help of the recursion formula (2.16). Before we proceed further, we note the following facts about the 1-1i sets ' i = 0,1,2. "{-10 is empty if Po is irrational. If Po is rational, in which case it can be written as a pure fraction, say, (1) where ra and r b are mutually prime to each other, then the sequence of points {rak' rbk} = {Sk~ , say, belong to that the farther the ratio in ..c-). -0 points (2) ra/rb In the special case p 0 1-10 • It is obVious is from unity, the fewer the points = 1/2, -C-)-0 consists of all the (a,a ) on the diagonal a=b. 1.h and 1-12 are empty when Po = 1/2. They have been primarily introduced for the situation p .,. 1/2. They represent o points near but not exactly on it, on the two sides of it, 110 1-l i being on the side of /\ i' i (-) , ..l -0 2. that a failure (success) at the point leads f\ the resulting posterior distribution to (3) Let Ln+1 situations, case Po (-) -- n =nJ Then is intersects (-) " L n 1-1 (\ L n 2( Al ) • denote the set of lattice points - - is not empty and = 1/2, empty if = t(a,b)fa+b a + b = n. on the line 1-1 () L n In fact, a point in /\ 1( 1\2)' is so near to the neu- 1-1 (f·'12), though belonging to 1 tral boundary, = 1, is empty if and only if Ln+1 C 1-10 In all other , in exactly one point. 1-1 =1-10 , it odd while L n (-2 n - since follows that In the special Ln () 1-1 is is the point (n/2, n/2) if n is even. 
-e (4) of 1-1 1 If Po is given by (2.42), then there are exactly interwoven with exactly tween any two points, say, r -1 points of b (rak, rbk) and 1-12 in (rak+l, l'bk+l ) ra-l points 1-1 of be- 1-1 0 • The results of the previous section ensure that a Bayes seguential procedure is truncated. nO This means that there exists an integer sufficiently large such that the sampling region (or the continua- tion set) lies below the straight line L o. Using the above facts, the n region of sampling can be narrowed down further as stated in the following lemma. Lemma 2.6. If the Bayes sequential procedure· for the binomial problem with a ~eta prior distribution is known to be truncated at every point (a, b) with a+b > nO tinuation set A is nO (Le., is a stopping point), then the con- contained within a rectangle (with a corner re- moved in the first two cases below): 50 (i) If the line a+b 1-l1 , (ao,bo) of (a °"b °) ~ 1-12 = bO + 1, and n. = nO (aO, b 0) with If the line a+b::;: nO interse;ts 1-2 at a point, say = bO a::;: a °+ 1" b (a°+1, b O-1). - and the intersects ~1 at a point, say, '-), then !\ is bounded by (a 0, bO) of . i -o - s=a ° ,b=b° If the line a+b::;: nO ..2! 1-10 ' 1-2, does not intersect arises when the line a+b::;: nO+l intersects (ao,bo) at a point, say ' then /\ is bounded by diagonal joining (iii) 1-2 (ao.bo) ~~th (0 , a - 1 , bO+l) • If the line a + b (ii) intersects ~I\ is bounded by a::;: a O , b +h . .. v e diagona 1 Jo~n~ng (iv) = nO 1-1 which case at a point, say ~ 1\ is bounded by , The proof of this long lemma is simple and follovm readily from lemma 2.5 and the above facts (1), (2) and (3). The above lemma is useful in locating the point of truncation. it is known that all the points on a certain line points, then so are all those on the line in 1-1, if any. 
L n are stopping L _ except perhaps the one n l 1-1 is empty, we move on n In other words, if If L _l n to the next line L _ ; if, on the other hand, L _l 1-1 is not n n 2 empty, in 1V'hich case it consists of just one point belonging to any of n the , {-).t .l _~ s i = 0,1,2, according to (3) above, then it needs to be verified vlhether it is a stopping point 01' a continuation point. it is a stopping point, we move on to the next line, and proceed If 51 analogously till we reach the first continuation point on 1-2 • The next theorem gives an easily applicable criterion by which it is possible to verify whether the point S(a, b ) in (-) n __ L_ n l is a stopping point or a continuation point. Theorem 2.2 • (a+l,b), ..!!. s(a,b) is a point on i-2 such that gil),~, ~ giO) ,..!..&.o, (a,b+l), are stopping points, tllim. s(a,b) is a stopping point or a continuation point according as "'(I; ) < e(g) (2.43) - or ",(g) > e(g) , where (2.44) Proof. "'(s) 's) ( 2A =t 2A ~Es(1-P)2 - (l-po) E (1-p27 , if 2A ~EI; p2 - Po EgP_7 Var (p 1 s if g s€ , if g € 1-20 1-21 ' 1-22 • Since the B&yes procedure is known to be truncated, the hypo- thesis of lemma 2.2 is satisfied so that (2.16) holds. Subtracting po(~) from both sides of (2.16), we get where € ",(~) is defined as in (2.23) • . Since sill and giO)are stopping points, by hypothesis, 52 Hence, (2.45) reduces to (2.46) p(g) • po(s) = min fO, e(g) - A(S t7 Now S is a stopping point or a continuation point according as or i.e., by (2.46), according as A(g) ~ e(s) The expressions for A(S) A(S) or > e(s) • given in (2.44) were already obtained in (2.28), (2.29) and (2.30). The proof of the theorem is complete. We now derive bounds on A(s) = A(a,b) for s = g(a,b). Lemma 2.7 • o -< A(a,b) < 2A We are now in a p (l-p )/(a+b) • 0 pos~tion point of truncation. 0 to determine the exact location of the For simplicity we consider the symmetric case first. Special case: p o As noted earlier, consists of points = 1/2. 1-11 (a,a). 
and 1-12 are empty in this case; ,,(-1 0 It follows from the procedure indicated above and Theorem 2.2 that the point of truncation is given by (ao,a o ) where a o is the smallest positive integer such that No'" for constant cost, e = 1 , while for abs'olUte deviation cost 53 1 C( a"a ) 5I = p - ~I dS (a" a) ° 1 = '2 (2a)! a! a! where b (a"a) denotes the central term of the symmetrical binomial i 1 distribution with parameters (2a l' "§) • is Thus it follows from (2.51)" (2.52) that the truncation point (ao" aO) where a O is the smallest positive integer such that -12 ( -A 2 - 1) (2.53) , for the case of constant cost; and such that for the case of absolute deviation cost. Standard binomial tables O may be used to obtain a from (2.54). For large values of A" we may use instead the asymptotic relation (a o, aO) is the point of truncation, then the O (a"a) ~ a < a are continuation points. Corollary 2.2. points ~bis If follows readily fram trJe deftnitlon of tbe point c..ation," the. mcnotoIaUally ~.spect· to d.ecr~s:j.ng $l:tope~ty' a~GJ,nd lJlheorein2.2~ ,e,specially bering t]p.q5t .p:E.g0 f or any point (a"b) • of' o~ 'trun- ~(I;l.ia}/e(a"a.) wirth the equat10n (2?45) remiarn- General case. O<p ° <1 It follows fram (2.43) and (2.47) that any point (a,b) with 2A P > a + b (l-p ) ° C(a,b) is a stopping point. smallest a+b Thus, if ° (a', bf be the point of ) satisfying (2.56) and if (ao,bo) 1-1 with be the point of truncation, then Consider the points of starting from (a' I b f ) • 1-1 in a sequence with decreasing a+b, Hith the help of the criterion developed in Theorem 2.2 we may then search for (ao,bo), as described already, along this sequence. 
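The one-dimensional search for a⁰ implied by (2.51)-(2.54) is easy to mechanize. The following Python sketch (the function names are ours, not the thesis's) scans the diagonal for the first a at which λ(a, a) ≤ c(a, a):

```python
from math import comb

def central_term(a):
    # b_i(a, a): central term of the symmetric binomial (2a, 1/2)
    return comb(2 * a, a) / 4 ** a

def truncation_point(A, cost="constant"):
    # Smallest positive integer a with lambda(a, a) <= c(a, a), where
    # lambda(a, a) = 2A Var(p | a, a) = A / (2(2a + 1)), and
    # c(a, a) = 1 (constant cost) or (1/2) b_i(a, a) (absolute deviation).
    a = 1
    while True:
        lam = A / (2 * (2 * a + 1))
        c = 1.0 if cost == "constant" else 0.5 * central_term(a)
        if lam <= c:
            return a
        a += 1
```

For A = 10 this yields a⁰ = 2 under constant cost, in agreement with (2.53), and a much deeper truncation point under the absolute deviation cost, consistent with the order πA²/4 suggested by (2.55).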
Special case: p₀ = r_a/(r_a + r_b), as defined in (2.42), with constant cost. In this special situation we do not have to go beyond the point (r_a(k⁰+1), r_b(k⁰+1)) along this sequence, k⁰ being the largest positive integer k for which

(2.58)  λ(r_a k, r_b k) = 2A p₀(1−p₀)/[(r_a + r_b)k + 1] > 1.

In fact, it may be shown in this special situation that the point of truncation (a⁰, b⁰) is one of the (r_a + r_b − 1) points of ℓ₁ ∪ ℓ₂ between the two successive points (r_a(k⁰+1), r_b(k⁰+1)) and (r_a k⁰, r_b k⁰) of ℓ₀, including the first. This follows since, from the definition (2.58) of k⁰, (r_a k⁰, r_b k⁰) is a continuation point while (r_a(k⁰+1), r_b(k⁰+1)) is a stopping point. Moreover, all points (a, b) with

(2.59)  a + b ≥ (r_a + r_b)(k⁰ + 1) + 1

are stopping points. This follows from (2.43), since for any such point (a, b),

λ(a, b) ≤ 2A p₀(1−p₀)/(a+b)  (by Lemma 2.7)
        ≤ 2A p₀(1−p₀)/[(r_a + r_b)(k⁰+1) + 1]  (by (2.59))
        ≤ 1  (by (2.43), (r_a(k⁰+1), r_b(k⁰+1)) being a stopping point).

Finally, it may be shown, just as in Corollary 2.2, that the points (r_a k, r_b k) with k ≤ k⁰ are all continuation points. We cannot assert, however, that all the points of ℓ₁ and ℓ₂ for which a + b < a⁰ + b⁰ are also continuation points, since λ(a, b) is not necessarily monotonically decreasing in a + b for (a, b) ∈ ℓᵢ, i = 1, 2. This is the reason for the particular definition of the point of truncation that we chose in the beginning of this section. We expect, however, that most of them will be continuation points, because by Lemma 2.7 λ(a, b) is closely bounded above by a function that is monotonically decreasing in a + b.

2.5 Procedure for obtaining the optimum boundary in the binomial problem.

In this section we describe how to obtain the optimum boundary constructively once the point of truncation is known. For simplicity we first consider the case p₀ = 1/2.

Special case: p₀ = 1/2. The neutral boundary, ℓ₀, consists of the points (a, a) in this case. Because of the symmetry of the problem, the optimum boundary will be symmetric with respect to the line a = b. Hence, without any loss of generality, we consider only a ≥ b. Let (a⁰, a⁰) be the point of truncation obtained as indicated in the previous section. By Lemma 2.6, all the points on the line a = a⁰ are stopping points. Also, from Corollary 2.2, all the points (a, a) with a < a⁰ are continuation points. Next, the recursion relation (2.16) reduces to

(2.60)  ρ(a, b) = min {ρ₀(a, b), [a/(a+b)] ρ(a+1, b) + [b/(a+b)] ρ(a, b+1) + C(a, b)},

where

(2.61)  ρ₀(a, b) = A {E[|p − ½| | a, b] − |a/(a+b) − ½|}

and

(2.62)  C(a, b) = 1, for constant cost;
               = E[|p − ½| | a, b], for absolute deviation cost.

For stopping points ρ = ρ₀, a known function, and for continuation points ρ < ρ₀, as follows from Lemma 2.2. Thus a point may be classified as a stopping point or not according as ρ = ρ₀ or ρ < ρ₀ at this point.

Consider the points on the line a = a⁰ − 1, starting from (a⁰−1, a⁰−1), which is a continuation point. ρ(a⁰−1, a⁰−1) may be obtained with the help of (2.60), since (a⁰, a⁰−1) and (a⁰−1, a⁰) are stopping points as noted earlier and hence have known ρ-values. Using this ρ(a⁰−1, a⁰−1) and the known ρ-values of the stopping points on the line a = a⁰, the ρ-values of the points on the line a = a⁰ − 1 may be calculated with the help of (2.60) successively with decreasing b-coordinates. Thus these points may be examined, one after another, for being stopping points or continuation points. We need, however, only continue on this line till a stopping point is reached for the first time. This is due to the fact that all further points on this line are stopping points, as can easily be shown using Lemma 2.5.

The procedure described in the previous paragraph determines the boundary point (i.e., the first stopping point encountered) on the line a = a⁰ − 1. We then consider finding this point on the next line a = a⁰ − 2, in the same way, by examining the points on it for a stopping point till we reach one.
This time we start with the point (a⁰−2, a⁰−2), whose ρ-value can be obtained from (2.60) by those of (a⁰−1, a⁰−2) and (a⁰−2, a⁰−1), already obtained. This process is then repeated to find the boundary points on the lines a = a⁰ − k, successively for k = 3, 4, etc. The boundary points on the part a ≤ b are obtained from those on a ≥ b by symmetry.

General case: 0 < p₀ < 1. As before, with each point (a, b) we associate the Bayes risk ρ(a, b), which satisfies the same recursion relation as (2.60), where now (2.61) is replaced by

(2.63)  ρ₀(a, b) = A {E[|p − p₀| | a, b] − |a/(a+b) − p₀|}

and (2.62) by

(2.64)  C(a, b) = 1, for constant cost;
               = E[|p − p₀| | a, b], for absolute deviation cost.

In the general case, the optimum boundary may not be symmetric with respect to ℓ₀: a = b p₀/(1−p₀). We have to consider the boundary both above and below this line. Let (a⁰, b⁰) be the point of truncation obtained as indicated in the previous section. Three different cases arise according as (a⁰, b⁰) belongs to ℓ₀, ℓ₁ or ℓ₂.

(1): (a⁰, b⁰) ∈ ℓ₀. By Lemma 2.6 the points on the lines a = a⁰ and b = b⁰ are stopping points; hence their ρ-values are known. The ρ-value of the next point on ℓ₀, i.e., (a⁰−1, b⁰−1), is obtained from those of (a⁰, b⁰−1) and (a⁰−1, b⁰). Using this ρ(a⁰−1, b⁰−1) and the ρ-values of the points on the line a = a⁰ in the equation (2.60), those of the points on the line a = a⁰ − 1 may be obtained successively from the point (a⁰−1, b⁰−1) till we reach the first stopping point on this line. Similarly, using ρ(a⁰−1, b⁰−1) and the ρ-values of the points on the line b = b⁰, those of the points on the line b = b⁰ − 1 may be obtained successively. The next point on ℓ₀ may be either on the line a = a⁰ − 2 or on the line b = b⁰ − 2. In either case, it should be sufficiently apparent by now how to proceed.
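The working-backwards computation of (2.60) is straightforward to carry out mechanically. The sketch below (helper names are ours; the posterior expectation E[|p − p₀| | a, b] is evaluated through the binomial-tail form of the incomplete beta integral, valid for integer a, b) tabulates ρ(a, b), forcing stopping on the line a + b = N; a point is then classified as a stopping point exactly when ρ(a, b) = ρ₀(a, b):

```python
from math import comb

def mean_abs_dev(a, b, p0):
    # E[|p - p0| | a, b] for a Beta(a, b) posterior with integer a, b,
    # using the identity I_x(p, q) = P(Bin(p+q-1, x) >= p).
    def inc_beta(x, p, q):
        n = p + q - 1
        return sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(p, n + 1))
    return p0 * (2 * inc_beta(p0, a, b) - 1) \
        - a / (a + b) * (2 * inc_beta(p0, a + 1, b) - 1)

def bayes_risk_table(A, N, p0=0.5, cost=1.0):
    # rho(a, b) of (2.60) by backward induction, forcing stopping on
    # a + b = N; N should exceed the truncation level a0 + a0.
    rho = {}
    for s in range(N, 1, -1):                  # s = a + b
        for a in range(1, s):
            b = s - a
            r0 = A * (mean_abs_dev(a, b, p0) - abs(a / s - p0))
            if s == N:
                rho[(a, b)] = r0
            else:
                go = (a * rho[(a + 1, b)] + b * rho[(a, b + 1)]) / s + cost
                rho[(a, b)] = min(r0, go)
    return rho
```

With A = 10 and constant cost this classifies (1, 1) as a continuation point and (2, 2) as a stopping point, matching the truncation point a⁰ = 2 obtained from (2.53).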
(2): (a⁰, b⁰) ∈ ℓ₁. By Lemma 2.6, the points on the lines a = a⁰ and b = b⁰ + 1 are stopping points in this case; hence their ρ-values are known. Proceeding as indicated above, the boundary point on the line b = b⁰ is obtained first, then that on the line b = b⁰ − 1, and so on.

(3): (a⁰, b⁰) ∈ ℓ₂. By Lemma 2.6, again, the points on the lines a = a⁰ + 1 and b = b⁰ are stopping points. The boundary point on the line a = a⁰ is obtained first. Next the process is repeated with the lines a = a⁰ − k, b = b⁰ − k, successively with increasing k.

2.6 Some points useful for computation of the optimum boundary.

Since the optimum (Bayes) boundary does not depend on the absolute magnitude of the Bayes risk ρ, but rather on the relative magnitude of the two quantities on the right-hand side of (2.60), we may, for convenience of computation, consider the following translated quantity,

(2.65)  V(a, b) = A E[|p − p₀| | a, b] − ρ(a, b),

which may be called the gain function associated with (a, b). Using the fact that

E[|p − p₀| | a, b] = [a/(a+b)] E[|p − p₀| | a+1, b] + [b/(a+b)] E[|p − p₀| | a, b+1],

the equation (2.60) may be written in terms of (2.65) as

(2.67)  V(a, b) = max {V₀(a, b), [a/(a+b)] V(a+1, b) + [b/(a+b)] V(a, b+1) − C(a, b)},

where

(2.68)  V₀(a, b) = A |a/(a+b) − p₀|.

Following Moriguti and Robbins [39], for actual computation we may still consider another quantity, defined as

(2.69)  M(a, b) = V(a, b) − V₀(a, b),

which may be called the net gain function associated with (a, b). It can be shown from (2.67) and (2.68) that the optimum boundary may be characterized in terms of M(a, b) as follows:

(2.70)  M(a, b) = max {0, [a/(a+b)] M(a+1, b) + [b/(a+b)] M(a, b+1) + λ(a, b) − C(a, b)},

where

(2.71)  λ(a, b) = 2A ab/[(a+b)²(a+b+1)],  if (a, b) ∈ ℓ₀;
              = 2A [b/(a+b)] [(b+1)/(a+b+1) − (1−p₀)],  if (a, b) ∈ ℓ₁;
              = 2A [a/(a+b)] [(a+1)/(a+b+1) − p₀],  if (a, b) ∈ ℓ₂;
              = 0, otherwise.

We give below a closed-form expression for C(a, b) for the absolute deviation cost in terms of tabulated functions:

(2.72)  C(a, b) = E[|p − p₀| | a, b] = ∫ |p − p₀| dξ_{(a,b)}(p)
              = p₀ [2 I_{p₀}(a, b) − 1] − [a/(a+b)] [2 I_{p₀}(a+1, b) − 1],

where

(2.73)  I_x(p, q) = [1/B(p, q)] ∫₀ˣ t^{p−1} (1−t)^{q−1} dt,  p, q > 0,

is the incomplete beta integral tabulated by K. Pearson [38].

For the special case p₀ = 1/2, we give below, in the form of a lemma, another expression for C(a, b) that might be more convenient for computation:

Lemma 2.8.  C(a, b) = E[|p − ½| | a, b] = [max(a, b)/(a+b)] g(a, b),

where g(a, b) is given by (2.74) as a finite sum of at most [|a−b|/2] + 1 terms involving the binomial probability b(a, b; ½), [x] denoting the largest integer ≤ x; in particular, g(a, a) = bᵢ(a, a).

2.7 Trinomial model.

In this section we write down, without much elaboration, the analogous results for the trinomial problem stated below. These results can be obtained just as in the binomial problem, with

(2.76)  f(i) = πᵢ, i = 0, 1, 2;  0 ≤ πᵢ ≤ 1,  π₀ + π₁ + π₂ = 1,

where a₁ is preferable if π₁ > π₂, and a₂ is preferable if π₁ < π₂, and with the loss

(2.77)  L(π₁, π₂; a₁) = 2A(π₂ − π₁), if π₁ ≤ π₂;  = 0, if π₁ > π₂;
        L(π₁, π₂; a₂) = 2A(π₁ − π₂), if π₁ ≥ π₂;  = 0, if π₁ < π₂.

It follows from the above specializations that

(2.78)  Γ₁ = {ξ : E_ξ π₁ > E_ξ π₂},  Γ₂ = {ξ : E_ξ π₁ < E_ξ π₂},  ℓ₀ = {ξ : E_ξ π₁ = E_ξ π₂}.

Also, (2.1) reduces to

(2.80)  φ_{ξ,j} = 1, if E_{ξⱼ} π₁ > E_{ξⱼ} π₂;  = u, if E_{ξⱼ} π₁ = E_{ξⱼ} π₂;  = 0, if E_{ξⱼ} π₁ < E_{ξⱼ} π₂;
        j = 0, 1, 2, …, n; n = 0, 1, 2, …,

and (2.2) reduces to the corresponding recursion relation (2.81).

Corresponding to Theorem 2.1 we have in this case the following theorem, which can be proved analogously.

Theorem 2.3.
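For integer parameters the incomplete beta integral in (2.72) needs no special tables, since I_x(p, q) = P{Bin(p+q−1, x) ≥ p}, a standard identity relating (2.73) to binomial sums. A small Python sketch (our own helper names) evaluating C(a, b) this way:

```python
from math import comb

def inc_beta(x, p, q):
    # Incomplete beta integral I_x(p, q) for integer p, q, via the
    # binomial-tail identity I_x(p, q) = P(Bin(p+q-1, x) >= p).
    n = p + q - 1
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(p, n + 1))

def abs_dev_cost(a, b, p0):
    # C(a, b) = E[|p - p0| | a, b], equation (2.72).
    return p0 * (2 * inc_beta(p0, a, b) - 1) \
        - a / (a + b) * (2 * inc_beta(p0, a + 1, b) - 1)
```

As a check, for a = b and p₀ = ½ this reproduces (2.52): C(a, a) = ½ bᵢ(a, a).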
A sufficient condition for a Bayes sequential procedure for the trinomial problem to be truncated with probability one is:

(2.83)  (a)  C(π₁, π₂) > 0,  and

(2.84)  (b)  lim_{n→∞} Var(π₁ − π₂ | ξₙ)/C(ξₙ) = 0 a.e., uniformly in n.

Moreover, the procedure takes at most n₀ observations, where n₀ is the smallest positive integer such that (2.85) holds for every n ≥ n₀ − 1.

It is possible to show, as in the remark (2) after Theorem 2.1, that it is only necessary to show that condition (b) holds for those ξₙ ∈ ℓ₀.

2.8 Dirichlet a priori distribution.

From now onwards we consider an a priori distribution belonging to the bivariate Dirichlet distribution family (1.5), some of whose properties have already been considered in subsection 1.5.1. From the property of closedness of this family under sampling, all the considerations of the binomial problem hold equally well. The only change in this case is that now we have to consider all the lattice points of the positive octant in the three-dimensional space. The various sets Λ₁, Λ₂, Λ, Γ₁, Γ₂, ℓ₀ are considered as before, with respect to the fixed frame of reference with (0, 0, 0) as the origin in the ξ-space. From (1.5 a, b) and (2.78) we may write

(2.86)  Γ₁ = {(a, b, c) : a > b},  Γ₂ = {(a, b, c) : a < b},  ℓ₀ = {(a, b, c) : a = b}.

The optimum boundary may be defined in this case as the set of stopping points (a, b, c) such that (a−1, b, c), (a, b−1, c) or (a, b, c−1) is a continuation point. As before, by dξₙ(π₁, π₂) we shall mean dξ_{(a,b,c)} with a + b + c = n. Analogous to Corollary 2.1, we have the following corollary:

Corollary 2.3. A Bayes sequential procedure for the trinomial problem with a Dirichlet prior distribution is truncated with probability one.

Outline of proof: Condition (a) of Theorem 2.3 may be shown to be satisfied for the special cases of cost and a priori distributions we are concerned with here.
Now, for ξ ∈ ℓ₀, it follows from (1.6), putting a = b, that

(2.88)  Var(π₁ − π₂ | a, a, c) = 2a/[(2a+c)(2a+c+1)].

Also, it can be easily verified that, for the absolute deviation cost,

(2.90)  C(a, a, c) = [4a/(2a+c)] bᵢ(a, a),

where bᵢ(a, a) is the central term in the binomial distribution with parameters (2a, ½); for constant cost, C(a, a, c) = 1. From these results, (2.84) follows immediately for constant cost. For the other cost, we have

(2.91)  Var(π₁ − π₂ | a, a, c)/C(a, a, c) = [2(2a+c+1) bᵢ(a, a)]^{−1}
                                        ≈ (πa)^{1/2}/[2(2a+c+1)] → 0 as 2a + c → ∞.

2.9 Determination of the points of truncation in the trinomial problem.

The results of the previous section ensure that a Bayes procedure for the trinomial problem is truncated. This means that for a sufficiently large value of n, all the lattice points of the plane, say Lₙ: a + b + c = n, are stopping points. In other words, the continuation set, Λ, lies below this plane. Just like Lemma 2.6, the lemma below narrows Λ down to some subset of a + b + c < n.

Let the two planes Lₙ and ℓ₀ intersect each other (which they do perpendicularly) along the line, say, fₙ: a = b = (n − c)/2. Consider the two planes P₁ⁿ and P₂ⁿ passing through fₙ at 45° to the plane ℓ₀. Obviously, P₁ⁿ and P₂ⁿ meet each other perpendicularly at fₙ.

Lemma 2.9. If a Bayes procedure for the trinomial problem is truncated at n, then the continuation set Λ is contained within the region Rⁿ bounded above by the two planes P₁ⁿ and P₂ⁿ.

Proof: Consider the section of Λ by a c⁰-plane, where c⁰ is such that a⁰ = (n − c⁰)/2 is a positive integer. The lemma asserts that the continuation set in that plane, Λ(c⁰), is contained within the square bounded above by a = a⁰ and on the right by b = a⁰. Consider the point (a⁰, a⁰−1, c⁰) of L_{n−1}: it belongs to Γ₁, and the points (a⁰+1, a⁰−1, c⁰), (a⁰, a⁰, c⁰) and (a⁰, a⁰−1, c⁰+1), all in Lₙ and hence stopping points, are its immediate successors; by Lemma 2.5 it is therefore a stopping point. It can similarly be shown that (a⁰−1, a⁰, c⁰), which belongs to Γ₂, is a stopping point, and, successively, that all the points of the plane L_{n−1}, except those on ℓ₀, if any, are stopping points. Utilizing this fact again, all the points on L_{n−2} lying on or outside Rⁿ may be shown to be stopping points. Continuing in this way successively, we arrive at the above lemma.

Formal definition of a point of truncation. For a fixed c, we may formally define the point of truncation in the c-plane, just as in section 2.4, as that point (a⁰, a⁰, c) of ℓ₀ such that a⁰ is the smallest integer for which no point (a, a, c) of ℓ₀ in that c-plane with a ≥ a⁰ is a continuation point. Because of the symmetry of the problem, it will turn out that, just as in the symmetric binomial problem (i.e., with p₀ = 1/2), the point of truncation on the c-plane might alternatively be defined as the "first" stopping point (a⁰, a⁰, c) on ℓ₀ in this c-plane, "first" here meaning the one with smallest a⁰.

It may be noted here that, unlike the symmetric binomial case, the points of truncation defined as such do belong to the optimum boundary as defined in the previous section. In fact, it may easily be seen from the above lemma that they cannot be reached from any point in the same c-plane, but may be reached from the next lower c-plane.

We now state how to determine these points of truncation. Just as in the binomial problem, the knowledge of their exact location enables us to obtain the optimum boundary, as will be described in section 2.10, by working backwards from these points with the help of the recursion formula (2.81). To find the points of truncation for a Bayes procedure known to be truncated at n, say, we need, because of Lemma 2.9, only examine the points of the set L_{n−1} ∩ ℓ₀ to determine whether they are stopping points or not.
A point belonging to L_{n−1} ∩ ℓ₀ is of the form (a, a, c), where 2a + c = n − 1. Notice that these points are on c-planes with odd (even) c if n is even (odd). A further observation at such a point leads to either (a+1, a, c), (a, a+1, c) or (a, a, c+1), each of which is a stopping point. The next theorem gives an easily applicable criterion to determine whether or not such a point (a, a, c) is a stopping point.

Theorem 2.4. If (a, a, c) is a point on the neutral boundary ℓ₀ such that (a+1, a, c), (a, a+1, c) and (a, a, c+1) are stopping points, then (a, a, c) is a stopping point or a continuation point according as

(2.92)  λ(a, a, c) ≤ C(a, a, c)  or  λ(a, a, c) > C(a, a, c),

where

(2.93)  λ(a, a, c) = 2Aa/[(2a+c)(2a+c+1)]

and

(2.94)  C(a, a, c) = 1, for constant cost;
                  = [4a/(2a+c)] bᵢ(a, a), for absolute deviation cost.

The proof of this theorem is analogous to that of Theorem 2.2 and hence is omitted.

If all the points of L_{n−1} ∩ ℓ₀ are thus found to be stopping points, we then restrict our attention to the next smaller region R^{n−1}. By Lemma 2.9, again, we need examine only the points of L_{n−2} ∩ ℓ₀ for stopping or continuation points, and this is done with the help of the above theorem as before. We proceed in the same way till we reach a stage at which not all the points of, say, L_{n−k} ∩ ℓ₀ are stopping points. It turns out that some will be stopping and some will be continuation points, depending on the value of c. Suppose that for a fixed value c⁰ of c, the point ((n−k−c⁰)/2, (n−k−c⁰)/2, c⁰) is found to be a continuation point. Now, using the monotonically decreasing property of λ(a, a, c⁰)/C(a, a, c⁰) with respect to a for the two types of cost C that we are considering, it is possible to show, just as in Corollary 2.2, that all the points (a, a, c⁰) with a < (n−k−c⁰)/2 are continuation points. Hence, by definition, the point is the point of truncation on the c⁰-plane. If, for other values of c, all the points of L_{n−k} ∩ ℓ₀ are still stopping points, we move on to the next set of points L_{n−k−1} ∩ ℓ₀, and so on.
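The criterion of Theorem 2.4 makes the search over the planes L_{n−1} ∩ ℓ₀ mechanical. A Python sketch (our own helper names, with λ and C taken in the reconstructed forms (2.93)-(2.94); these exact constants should be regarded as an assumption of the sketch) that scans a c-plane for its truncation point:

```python
from math import comb

def trinomial_truncation_point(A, c, cost="constant"):
    # Smallest positive integer a with lambda(a, a, c) <= C(a, a, c),
    # taking lambda(a, a, c) = 2Aa / ((2a+c)(2a+c+1))   [form (2.93)]
    # and C(a, a, c) = 1 or 4a b_i(a, a)/(2a+c)         [form (2.94)].
    a = 1
    while True:
        lam = 2 * A * a / ((2 * a + c) * (2 * a + c + 1))
        if cost == "constant":
            cst = 1.0
        else:
            bi = comb(2 * a, a) / 4 ** a     # central binomial term b_i(a,a)
            cst = 4 * a * bi / (2 * a + c)
        if lam <= cst:
            return a
        a += 1
```

For A = 10 this gives a⁰(1) = 4 and a⁰(2) = 1 under constant cost, agreeing with the constant-cost numerical illustration given later in this section.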
It is now evident how the points of truncation may be obtained for any value of c. Using (2.92) and (2.93), we are thus led to conclude, for the case of constant cost, that the set of truncation points for the Bayes procedure consists of the lattice points of a curve having the following parametric representation:

(2.96)  (a⁰(c), a⁰(c), c),

where a⁰(c) is the smallest positive integer such that

(2.97)  2a/[(2a+c)(2a+c+1)] ≤ 1/A,

for a fixed c. Obviously a⁰ depends on c, and hence the particular notation a⁰(c). In fact, it is easily seen from (2.97) that a⁰(c) is decreasing in c. This means that the point of truncation moves towards the origin as c increases.

The set of truncation points for the case of absolute deviation cost is also given by the same type of curve as (2.96), where (using (2.92) and (2.94)) a⁰(c) is now defined to be the smallest positive integer for which

(2.98)  A/[2(2a+c+1)] ≤ bᵢ(a, a),

for a fixed c. For large A, a⁰(c) is given approximately by replacing bᵢ(a, a) in (2.98) by its asymptotic value (πa)^{−1/2}.

As a numerical illustration, we give below, for A = 10, some of the points of truncation (2.96).

Constant cost:
    c       1   2
    a⁰(c)   4   1

Absolute deviation cost:
    c       1   2   3   4   5   6
    a⁰(c)  20  17  16  15  14  13

Finally, we note that if the point (2.95) is a truncation point, then the point ((n−k−c⁰)/2, (n−k−c⁰)/2, c⁰ + 1) belongs to L_{n−k+1} and hence is a stopping point, from the very definition of L_{n−k}. We may formally put this observation in the form of the following lemma, which may otherwise be proved directly from the definitions of a⁰(c) in (2.97) and (2.98).

Lemma 2.10. If (a⁰(c), a⁰(c), c) represents a truncation point, then (a⁰(c)−1, a⁰(c)−1, c+1) belongs to the stopping set.

We shall use this lemma in the next section.

2.10 Procedure for obtaining the optimum boundary in the trinomial problem.
We describe in this section how to obtain the optimum boundary systematically, once the points of truncation are known, using some recursion relations as well as some simple facts about the optimum stopping and continuation sets as embodied in Lemmas 2.5, 2.9 and 2.10.

Because of the symmetry of the problem, the optimum boundary is symmetric with respect to the plane a = b. Without any loss of generality, we may thus consider only a ≥ b. Suppose the points of truncation (a⁰(c), a⁰(c), c) are obtained as indicated in the previous section. Consider any particular c-plane, in which, omitting the fixed co-ordinate for the moment, (a⁰(c), a⁰(c)) is the point of truncation. By Lemma 2.9, all the points on the line a = a⁰ are stopping points. Moreover, as noted earlier in the previous section, all the points on the diagonal a = b with a < a⁰ are continuation points. Finally, the recursion relation (2.81) reduces to

(2.100)  ρ(a, b, c) = min {ρ₀(a, b, c), [a/(a+b+c)] ρ(a+1, b, c) + [b/(a+b+c)] ρ(a, b+1, c) + [c/(a+b+c)] ρ(a, b, c+1) + C(a, b, c)},

where

(2.101)  ρ₀(a, b, c) = A {E[|π₁ − π₂| | a, b, c] − |a − b|/(a+b+c)}

and

(2.102)  C(a, b, c) = 1, for constant cost; for absolute deviation cost, C(a, b, c) is given in closed form by Lemma 2.11 below.

For stopping points ρ = ρ₀, a known function, and for continuation points ρ < ρ₀, as follows from Lemma 2.2. Thus a point may be examined for being a stopping point or not according as ρ = ρ₀ or ρ < ρ₀ at this point.

Consider the points on the line a = a⁰ − 1, starting from (a⁰−1, a⁰−1, c). The points (a⁰, a⁰−1, c) and (a⁰−1, a⁰, c) are stopping points, as noted earlier, by Lemma 2.9; so is (a⁰−1, a⁰−1, c+1), by Lemma 2.10. Hence ρ(a⁰−1, a⁰−1, c) may be obtained from (2.100). Also, since (a⁰−1, a⁰−1, c+1) is a stopping point, by Lemma 2.9 again all the points on the line a = a⁰ − 1 on the (c+1)-plane are stopping points. Using this fact, the ρ-values associated with the other points on the line a = a⁰ − 1 on the c-plane may be calculated with the help of (2.100) successively from (a⁰−1, a⁰−1, c) onwards, and hence these points may be determined to be stopping points or not.
One needs to proceed till one encounters a stopping point on this line a = a⁰ − 1 for the first time. All further points on this line can easily be shown to be stopping points with the help of Lemma 2.5.

The procedure described in the above paragraph determines the boundary point (the first stopping point encountered) on the line a = a⁰(c) − 1 on the c-plane. This is now carried out for all possible values of c before one considers finding the boundary point on the next line a = a⁰(c) − 2. For a fixed c, we start from the point (a⁰−2, a⁰−2, c) and proceed to the left on the same c-plane as before. It is to be noticed that the ρ-values of the points (a⁰−2, a⁰−2−k, c+1), k ≥ 0, that are needed in this connection must already have been obtained in the procedure described so far (if they are not already stopping points, of course). This is again carried out for all possible values of c. In this way the boundary points on the lines a = a⁰(c) − k are obtained for k = 3, 4, etc., successively. The boundary points on the part a ≤ b are obtained from those on a ≥ b by symmetry. The description of the procedure is thus complete.

2.11 Some points useful for computation of the optimum boundary.

For convenience of computation we may replace (2.100), as in the binomial problem, by an analogous relation in terms of the so-called gain function V(a, b, c), defined as

(2.103)  V(a, b, c) = A E[|π₁ − π₂| | a, b, c] − ρ(a, b, c).

Using this, the relation (2.100) becomes, in terms of V,

(2.105)  V(a, b, c) = max {V₀(a, b, c), [a/(a+b+c)] V(a+1, b, c) + [b/(a+b+c)] V(a, b+1, c) + [c/(a+b+c)] V(a, b, c+1) − C(a, b, c)},

where

(2.106)  V₀(a, b, c) = A |a − b|/(a+b+c).

Alternatively, (2.105) may be put in terms of the so-called net gain function M(a, b, c), defined as

(2.107)  M(a, b, c) = V(a, b, c) − V₀(a, b, c),

as follows: for a ≠ b,

(2.108)  M(a, b, c) = max {0, [a/(a+b+c)] M(a+1, b, c) + [b/(a+b+c)] M(a, b+1, c) + [c/(a+b+c)] M(a, b, c+1) + λ(a, b, c) − C(a, b, c)},

while for a = b,

(2.109)  M(a, a, c) = V(a, a, c) = max {0, λ(a, a, c) − C(a, a, c) + [2a/(2a+c)] M(a+1, a, c) + [c/(2a+c)] M(a, a, c+1)}.

We give an expression for C(a, b, c) for the absolute deviation cost that might be useful for computation:

Lemma 2.11.

(2.110)  C(a, b, c) = E[|π₁ − π₂| | a, b, c] = [4 max(a, b)/(a+b+c)] g(a, b),

where g(a, b) is defined in (2.74).

CHAPTER III

BAYES PROCEDURES WITH MODIFIED LOSS STRUCTURE

3.0 Introduction and summary

The modified loss structures as defined in (1.3) and (1.10) are only special cases of the general situation in which the loss incurred at any stage depends on the observed outcome. One important feature of these special cases is, however, that any optimum sequential procedure for a decision problem involving this type of loss must be truncated at A. This follows intuitively from the fact that any terminal decision after A observations must be correct, in the sense that no loss is involved and the situation cannot be improved upon by taking further observations, whereas any further observation is going to cost us more for any type of positive cost function. This argument may, however, be put in formal mathematical terms by developing an analogous version of Lemma 2.4 for the modified loss structure.

It is well known [6, 2] how to obtain the Bayes sequential procedure within the class of procedures truncated at a fixed stage, by the working-backwards technique, for loss and cost depending on the observed outcome in general. The existence of the Bayes procedures for the various decision problems involving the modified loss structure is thus assured. The problem is then to give a simpler
A further simplification in the characterization results as shown in section ,.2 when one considers the appropriate absolute deviation type of cost corresponding to a modified loss. As noted already in section 1.4, it follows from physical considerations that the modified loss structure fits more naturally with the absolute deviation cost. The results of the first two sections are then applied in section ,., and tively. ,.4 to the binomial and the trinomial model respec- It is in section ,., that a rather undesirable feature of the Bayes procedures with the modified loss structure compared to those with linear loss structure is illustrated in details with the binomial model. ,.1 Characterization of the Bayes procedure with a 6eneral model. Extending the results in £2::'7 in the desired direction we arrive at the intended characterization stated as a theorem below. Apart from the assumptions of independent observations and constant (but may depend on the unknown parameter) cost per observation, we introduce the assumptions regarding the particular structure of the loss in the appropriate places. We shall retain the notation of section 2.1 as far as possible changing only the relevant ones. 78 Let Notations: be the loss incurred due to action a when is the true :parameter. Q after observing xl, ••• ,x , j i For consistency of notation we de- note by i = 1 1 2; the loss incurred when a ~ and when is true. (3.0) Let We assume the usual [" 9 min i=1,2 ljr0 5~ _ (ljr gI Let ail a l(g), j = O,1,2, ••• ,nJ i = 1,2. Lj(g,a ), i j = 0,1,2, ••• ,n. be an arbitrary (but measurable) sequential decision :procedure truncated at Let .~Lj(g, Lj{Q, a ). i _7 measurability and o = Aj(g) 5;: (ljr, ¢) = e@ is taken without taking any observation i integrability conditions for Lj(l,a i ) Q nl i.e," + 1jr1+ ••• + ljrn ~ 1 ¢g) a. e. for all denote the Bayes procedure for Q € s• (0 . 
Also let j = 0,1,2, ••• , n be the conditional :probability of stop];ling with that j or more observations are taken. quely fram ~g,j'S and vice versa. ljr~lj 's j observations given may be obtained uni- 79 With these added notations to those of section 2.1, we may write the following results. The risk r(Q, 5) associated with 5 when Q is true is given by = Expected loss + Expected cost j~O I'"J = {Lj (Q'&2)¢j + Lj(Q'&l)(l-¢j) 1 " :'£'" The average risk associated with 0 (:j.2) r(g,5) Ir(Q 5) ds (Q) = J is given by • GD, Substituting (3.1) in (3.2) and changing the order of integration !: n j=o f , . j At (3.4) > + j (3.6) = ~ j=o C !,.rAj(S.)+jc(s.)7fe J - y)' J J- (g j).7 fg,j dj..Lj .df.1j s,J • 80 The equality is obtained , i f ~j(Sj) o j = 0,l,2, ••• ,n where ().5) when ¢ = ¢g defined as , if It ,if " = Lj(Sj,a l ) = " < Lj (Sj,a 2 ) , = " • u is any arbitrary number, 0 <u <1 • Assumption (L ): The loss depends on the observations in a l rather special way in the sense that it does not depend on the actual magnitude of the observations. Specifically, Further notations: (3.9) *o + Ijr /) * * ... +'!rn- 1 = 1 *} * 1; ¢*'0 , ..., ¢n- 1 n- • •• , Ijr vJe shall show below that with the restriction (3.8), we may write (3.10) 81 Writing the first term in the summation in (3.3) separately and using (3.8) we have (3.11) (3012) j~2 f "';-1 + (1-'"0) j [" (n-l - j-l) { L( 9,a2 ) ~-1 G) J:.i + L(Q"al)(l-¢;_l)}+ (j-l)c(Q17d5j(Q) fs"jdll where (3"01,3) (1-'"o)Al = J "'1 X i.e." f (n-l) 0 j , 82 NOlii' we may write Substituting (3.15) in the sum, (3. 1 7) = A1 + A2 ( ) f g(X1 ) A say, in (3.12), we have 2 . C4.t(x1 ) • (Xl) r(sl ')! thus arriving at (3.10). p n Now remembering the notation (g) = inf 5 t:. r{ g, 5) /::,n we have (3.18) J . J" (xl) r{sl * ' 5 ) fg{Xl)d~(Xl) ~ . * ,5), Hence it 'follows from (3.7), and (3.10), that (3.19) inf {, ( ~n '"o fixed +0 (sL7 and hence (3.21) , if It = If • It may similarly be shown 'that j and hence ';'There by (3.0) and. 
(3.8) ('.24) ~j(E) = (n-1) min i=1,2 L (e,a i ) , = 1,2, •••,0-1 ; 84 X(s) (n-j) , say. and Po (s) is defined to be identically ° for all S • We may thus state the results in the form of the following theorem: Theorem 3.1. Under the assumptions of identically and independently distributed sequence of observations and with constant cost per observation, the Bayes terminal decision rule in the class 6 is given by (3.7). n If, (Ll ), i.e., (3.8), can be characterized as: moreover, the loss function satisfies the assumption then the Bayes sampling rule in the class (3.26) "g,j = 6 n {a , if Pn_j(Sj) < ~j(Sj) , I\ .. 1 , if Pn_j(Sj) = ~j(Sj); j = O,l, ••• ,n-l where (3.27) Pn_/s) = min .L~j(£)'Jpn-j-l(siY»fg(y)djJ(y) :{ j ~j(S) + c(sL7 = O,l, ••• ,n-l ; being given by (3.25) • ~ve may express the Bayes sampling rule in words as follows: After any stage of sampling, j, say, j take another observation according as = O,l, ••• ,n-l, we stop Pn_j(Sj) = (n-j)X(g) or or Pn_j(Sj) < (n-j) Xes) • We may alternatively define the Bayes sampling rule in terms of certain sets in ~~ Black'Vlell and Girshick n • For this, let us denote, following .L27 , , (n-j) ~ (s)} , j = 0,1, ••• ,n-1 . It may be verified that where r-·' / \~ois Now let S* jn' (;.;0) j the set of all g's. = 0,1, ••• ,n. Then, (;.31) S*, n = - (S*' , on *' *' Sln' ••• , Snn ) forms the required partition of the sample space 3C n Bayes procedure takes j observations if and only if such that the *, xES. -n In It may also be verified that (3.32) *' San * ' C Sjn j = 0,1, ... ,n 1-There S* = (S* , ••• , S* ) is the partition of the sample space 3e n n on n,n corresponding to the truncated Bayes procedure for the decision problem vTith the traditional loss function (;.3;) i = 1,2, This means in other words that if the truncated Bayes procedure with the loss (;.;;) tells one to stop then so will also be told by the Bayes procedure with the modified loss from physical considerations. 
3.2 Modified loss structure with absolute deviation cost.

We now consider the following special situations:

I. Assumption (L₂):

(3.34)  L(θ, a₁) = 2|θ|, if θ ≤ 0;  = 0, if θ > 0;
        L(θ, a₂) = 2θ, if θ > 0;  = 0, if θ ≤ 0.

Then

(3.35)  χ(ξ) = E_ξ|θ| − |E_ξ θ|.

From (3.35) it follows that the various sets Γ₁, Γ₂, ℓ₀ corresponding to (2.7) with this modified loss may be defined analogously, as in (2.12) or (2.78), in terms of E_ξ θ.

II. Assumption (C₂): c(θ) = k|θ|, where k is a positive integer.

It may easily be seen from (3.35), (3.25), (3.27) and (3.26) that with these specializations the Bayes procedure is truncated at n − k. With k = 1, i.e., for the absolute deviation type of cost, we may thus state the result in the form of the following corollary:

Corollary 3.1. The Bayes procedure for the decision problem with the modified loss structure, viz.,

(3.36)  L(θ, a₁; j) = 2(n−j)|θ|, if θ ≤ 0, j ≤ n;  = 0, if θ > 0, j = 0, 1, 2, …;
        L(θ, a₂; j) = 2(n−j)θ, if θ > 0, j ≤ n;  = 0, if θ ≤ 0, j = 0, 1, 2, …;

and with the absolute deviation cost, viz.,

(3.37)  c(θ) = |θ|,

is truncated at n − 1.

From now on we shall consider only these two specializations, (3.36) and (3.37), which lead to a simpler characterization of the Bayes sampling rule in terms of the following quantity,

(3.38)  Vⱼ(ξ) = (n − j) E_ξ|θ| − ρ_{n−j}(ξ), j = 0, 1, 2, …, n;  = 0, j ≥ n,

which may be called the gain function. It follows from (3.38), (3.25) and (3.35) that

(3.39)  Vⱼ(ξ) = max {(n−j) E_ξ|θ| − (n−j)[E_ξ|θ| − |E_ξ θ|],
                     (n−j) E_ξ|θ| − ∫ ρ_{n−j−1}(ξ^{(y)}) f_ξ(y) dμ(y) − E_ξ|θ|},  j = 0, 1, …, n−1.

Now, using the fact that E_ξ|θ| = ∫ E_{ξ^{(y)}}|θ| f_ξ(y) dμ(y), it follows from (3.39) that

(3.40)  Vⱼ(ξ) = max {(n − j) |E_ξ θ|, ∫ Vⱼ₊₁(ξ^{(y)}) f_ξ(y) dμ(y)},  j = 0, 1, …, n−1.
The Bayes sampling rule may thus be stated as follows: after any stage of sampling j, say, j = 0,1,...,n−1, we stop or take another observation according as

V_j(s_j) = (n−j)|E_s θ|  or  V_j(s_j) > (n−j)|E_s θ|.

It may be noted that it follows from the definition (3.38) of V_j that

(3.43) V_{n−1}(s) = |E_s θ|,

thus corroborating Corollary 3.1 that the Bayes procedure is truncated at n−1.

3.3 Binomial model. To obtain the optimum (Bayes) sequential procedure for the special problems we have been considering, the results of the previous section are specialized in the usual way, replacing the n there by A, the j there by n, and the θ there by p − p_0; s is defined as in (2.36), and r_1 and r_2 are defined in an obvious way. Equation (3.41) reduces in this case to

(3.44) V_n(a,b) = max{ V_{n,0}(a,b), (a/(a+b)) V_{n+1}(a+1,b) + (b/(a+b)) V_{n+1}(a,b+1) },

where

(3.45) V_{n,0}(a,b) = (A−n) |a/(a+b) − p_0|, n = 0,1,...,A−1.

From (3.43) it follows that

(3.46) V_{A−1}(a,b) = V_{A−1,0}(a,b) = |a/(a+b) − p_0| for all (a,b).

With the help of (3.44) it is possible, at least in principle, to calculate V_n(a,b) for all possible values of (a,b) and n, starting from the known values (3.46) for n = A−1. Once these values are known, the optimum sampling rule becomes specified.

We now compare this procedure with that obtained in the previous chapter with the linear loss structure. We first note that the subscript n in V_n(a,b) denotes the number of observations already obtained. If we start with an a priori distribution dξ(a_0, b_0), then the relevant points (a,b) for which V_n(a,b) has to be defined are those for which a+b = (a_0+b_0) + n. The dependence of V on the stage of sampling, n, is thus essentially a dependence on the particular a priori distribution dξ(a_0, b_0) through a+b. The prior distribution being fixed for a particular decision problem, we may alternatively write (3.44), omitting the dependence on n, as

(3.47) V(a,b) = max{ V_0(a,b), (a/(a+b)) V(a+1,b) + (b/(a+b)) V(a,b+1) },

where

(3.48) V_0(a,b) = (A + a_0 + b_0 − a − b) |a/(a+b) − p_0|

(which is not to be confused with the earlier V_{n,0}(a,b) of (3.44)).
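The recursion (3.47)-(3.48) can be carried out exactly in rational arithmetic. The following sketch is my own illustration (function and variable names are hypothetical): starting from the truncation level a+b = a_0+b_0+A−1, where V = V_0, it works back to the prior point (a_0, b_0).

```python
# A minimal sketch (my own code, not the thesis') of the backward
# induction (3.47)-(3.48) for the modified loss with absolute-deviation cost.
from fractions import Fraction

def gain_table(A, a0, b0, p0=Fraction(1, 2)):
    """Return {(a, b): V(a, b)} per (3.47), with
    V0(a, b) = (A + a0 + b0 - a - b) * |a/(a+b) - p0|  per (3.48)."""
    top = a0 + b0 + A - 1                      # truncation level a + b
    V = {}
    for s in range(top, a0 + b0 - 1, -1):      # s = a + b, working backwards
        for a in range(a0, s - b0 + 1):
            b = s - a
            v0 = (A + a0 + b0 - s) * abs(Fraction(a, s) - p0)
            if s == top:                       # must stop at the truncation level
                V[a, b] = v0
            else:                              # recursion (3.47); continue iff V > V0
                go = Fraction(a, s) * V[a + 1, b] + Fraction(b, s) * V[a, b + 1]
                V[a, b] = max(v0, go)
    return V

V = gain_table(4, 1, 1)
print(V[1, 1], V[2, 2])   # 1/2 1/10  (both exceed V0 = 0: continuation points)
```

The small example agrees with Lemma 3.1 below: the diagonal points (1,1) and (2,2) are continuation points, while (3,1), with |a−b| = 2 = A−n, is a stopping point with V = V_0 = 1/2.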
The difference between (3.47) and (2.67) lies in the difference between the two definitions of V_0(a,b). It may be noted that the nature of a point (a,b), as to whether it is a stopping point or a continuation point, as determined with the help of (2.67), did not depend on the particular a priori distribution one starts with, whereas that is the situation in the present case with (3.47). This is an undesirable feature of the procedures considered in this chapter in comparison with those of Chapter II, in the sense that one has to compute the optimal boundary separately for each prior distribution, whereas, as noted earlier, for the procedures of that chapter we need compute the optimal boundary only for the particular a priori distribution dξ(1,1), and the optimal boundary for any other a priori distribution may be obtained by a simple change in the frame of reference.

It is possible to prove the convexity of the two stopping regions in this case (for the general model) just as in the proof of Lemma 2.5 for the traditional loss. This result may analogously be used, along with the fact that all the points (a,b) with a+b = a_0+b_0+A−1 are stopping points, as in the traditional case, to arrive at a result corresponding to Lemma 2.6, delimiting the region of sampling. We have, however, preferred to show this directly using (3.44). For simplicity we only consider the symmetric case, i.e., p_0 = 1/2.

Lemma 3.1.
(a) V_n(a,a) > V_{n,0}(a,a) = 0 for all a, n < A−1;
(b) V_n(a,b) = V_{n,0}(a,b) for |a−b| ≥ A−n, n ≤ A−1.

Proof: (a) Putting a = b in (3.44) and (3.45) with p_0 = 1/2,

(3.51) V_n(a,a) = max{ 0, (1/2) V_{n+1}(a+1,a) + (1/2) V_{n+1}(a,a+1) }.

Now, because of the symmetry of the problem, V_n(a,b) = V_n(b,a) for all (a,b) and all n, and hence in particular V_{n+1}(a+1,a) = V_{n+1}(a,a+1). We thus have from (3.51)

V_n(a,a) = max{ 0, V_{n+1}(a+1,a) }.

From the definition (3.44) of V it follows that

V_{n+1}(a+1,a) ≥ V_{n+1,0}(a+1,a) = (A−n−1)/(2(2a+1)) > 0 for n < A−1.
Hence V_n(a,a) = V_{n+1}(a+1,a) > 0 = V_{n,0}(a,a) for all a, n < A−1.

(b) Because of symmetry it is sufficient to prove the assertion for a > b only, i.e., to show

(3.56) V_n(a+k,a) = V_{n,0}(a+k,a) for k ≥ A−n, n ≤ A−1.

We prove (3.56) by backward induction on n. Suppose (3.56) is true for n = m; we show that it is also true for n = m−1. From (3.44) and (3.45) it follows that

(3.57) V_{m−1}(a+k,a) = max{ V_{m−1,0}(a+k,a), ((a+k)/(2a+k)) V_m(a+k+1,a) + (a/(2a+k)) V_m(a+k,a+1) }.

From the induction hypothesis we get, for k ≥ A−m+1,

(3.58) V_m(a+k+1,a) = V_{m,0}(a+k+1,a) = (A−m)(k+1)/(2(2a+k+1)),

and

(3.59) V_m(a+k,a+1) = V_{m,0}(a+k,a+1) = (A−m)(k−1)/(2(2a+k+1)).

Hence (3.57) reduces to

V_{m−1}(a+k,a) = max{ (A−m+1)k/(2(2a+k)), (A−m)k/(2(2a+k)) } = V_{m−1,0}(a+k,a), k ≥ A−m+1.

Thus (3.56) is true for n = m−1. But (3.56) is true for n = A−1, by (3.46). Hence (3.56) follows by induction, and the proof of the lemma is complete.

When interpreted in words, part (a) of the above lemma means that the points (a,a) on the neutral boundary that may be reached at any stage of sampling up to A−2 are continuation points. We stop in any case after stage A−1, whatever point (a,b) is reached. Hence the point of truncation may be defined as the point (a^0, a^0), where a^0 = (a_0+b_0+A−1)/2 in case a_0+b_0+A−1 is even, or a^0 = (a_0+b_0+A−1)/2 + 1/2 in case a_0+b_0+A−1 is odd. Part (b) of the above lemma then asserts that the optimal region of sampling lies within the square bounded above by a = a^0 and on the right by b = a^0. A possible method for obtaining the optimum boundary numerically may be described analogously as in section 2.5 with the help of (3.47), starting from the point of truncation and working backwards.

3.4 Trinomial model. For this specialization we need put θ = μ_1 − μ_2 in the results of section 3.1.
The sets r_1 and r_2 are defined in the same way as in (2.86). Equation (3.41) reduces in this case to

(3.61) V_n(a,b,c) = max{ V_{n,0}(a,b,c), (a/(a+b+c)) V_{n+1}(a+1,b,c) + (b/(a+b+c)) V_{n+1}(a,b+1,c) + (c/(a+b+c)) V_{n+1}(a,b,c+1) },

where

(3.62) V_{n,0}(a,b,c) = (A−n) |a−b|/(a+b+c).

Again, it follows from (3.43) that

(3.63) V_{A−1}(a,b,c) = V_{A−1,0}(a,b,c) = |a−b|/(a+b+c) for all (a,b,c).

With the help of (3.61) it is possible, at least in principle, to calculate V_n(a,b,c) for all possible values of (a,b,c) and n, starting from the known values (3.63) for n = A−1. Once these values are known, the optimum sampling rule becomes specified. As in the binomial problem, we may alternatively write (3.61) as

(3.64) V(a,b,c) = max{ V_0(a,b,c), (a/(a+b+c)) V(a+1,b,c) + (b/(a+b+c)) V(a,b+1,c) + (c/(a+b+c)) V(a,b,c+1) },

where

(3.65) V_0(a,b,c) = (A + a_0 + b_0 + c_0 − a − b − c) |a−b|/(a+b+c),

and (a_0, b_0, c_0) represents the starting point. The same remarks as in the binomial case apply in comparing (3.64) with (2.105). We state below, without proof, a lemma corresponding to Lemma 3.1 for the binomial problem.

Lemma 3.2.
(a) V_n(a,a,c) > V_{n,0}(a,a,c) = 0, n < A−1, all a, c;
(b) V_n(a,b,c) = V_{n,0}(a,b,c) for |a−b| ≥ A−n, n ≤ A−1, all c.

The proof is analogous to that of Lemma 3.1. Just like Lemma 2.9, this one also delimits the region of sampling within a certain region defined as in Lemma 2.9, where n is now replaced by a_0+b_0+c_0+A−1. Using the lemma above and the recursion relation (3.64), a possible method for obtaining the optimum boundary systematically may be described analogously as in section 2.10, starting from the points of truncation (a^0(c), a^0(c), c) satisfying

(3.66) a^0(c) = the smallest positive integer such that 2a^0(c) + c ≥ a_0 + b_0 + c_0 + A − 1, c ≥ c_0.

It follows from the definition (3.66) of a^0(c) that a^0(c) is nonincreasing in c.
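The truncation points of (3.66) are immediate to compute; the following is my own illustrative helper (the name `trunc_point` is hypothetical), anticipating the uniform-prior example below:

```python
# a^0(c) of (3.66): the smallest positive integer a with
# 2*a + c >= a_0 + b_0 + c_0 + A - 1, i.e. a = ceil((s - c)/2).
def trunc_point(c, A, a_0, b_0, c_0):
    s = a_0 + b_0 + c_0 + A - 1
    return max(1, (s - c + 1) // 2)       # integer ceiling of (s - c)/2

# Uniform prior a_0 = b_0 = c_0 = 1 and A = 10, so 2*a^0(c) + c >= 12:
points = [trunc_point(c, 10, 1, 1, 1) for c in range(1, 8)]
print(points)   # -> [6, 5, 5, 4, 4, 3, 3]
```

The list is nonincreasing in c, as (3.66) requires.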
Suppose we start with a uniform prior, i.e., with a_0 = b_0 = c_0 = 1. Then for A = 10 the points of truncation are obtained from (3.66), with 2a^0(c) + c ≥ 12, as

c:       1  2  3  4  5  6  7
a^0(c):  6  5  5  4  4  3  3

These may be compared with the values obtained on page 70.

CHAPTER IV

ASYMPTOTIC BEHAVIOUR OF THE OPTIMAL SOLUTION

4.0 Introduction and summary.

In this chapter we study the limiting behaviour of the optimum boundaries of the various sequential decision problems considered in Chapter II under limiting conditions which tend to require large samples. As sample size is not one of the given parameters of a sequential problem, a natural approach to the large sample theory is to make the cost of an observation tend to zero (while the loss is held fixed), as considered by Wald [39]. Since it is the relative magnitude of the two types of cost, namely the cost of an observation and the cost of making a wrong decision, that matters in obtaining the optimum boundaries, an equivalent approach is to make the loss tend to infinity keeping the cost fixed. We adopt this alternative approach, as it is particularly suited to our sequential decision problems. The limiting condition suggested can then be obtained by letting the parameter A in the various loss functions approach infinity.

From physical considerations it is apparent that as A becomes larger, more and more sampling is needed to reach the boundary. Thus, in the limiting case, any point with finite co-ordinates (a,b) or (a,b,c), whichever the case may be, will lie within the continuation set. The boundary will consist of points whose co-ordinates also approach infinity. The limiting behaviour of the boundary, if any, has therefore to be described in terms of suitably normalized co-ordinates. The first point to be resolved in this study, then, is what constitutes a suitable normalization of the co-ordinates in each case, in order to yield non-degenerate limits.
It turns out that the appropriate normalization for a given case depends on the cost function. Throughout this chapter the binomial problem is considered, for simplicity, with p_0 = 1/2. The asymptotic behaviour of the optimal solution has been studied by Moriguti and Robbins [32] for this problem with constant cost. Following them closely, the problem of finding the asymptotic optimal solution for the binomial problem with absolute deviation cost is reduced in section 4.1 to that of finding the solution of a partial differential equation free boundary value problem. In section 4.2 certain series representations of the latter solution are obtained and compared with those given by Moriguti and Robbins for the corresponding case with constant cost. Sections 4.3 and 4.4 correspond to sections 4.1 and 4.2 respectively for the trinomial problem; the series representation of the optimal solution is obtained in this case for both types of cost.

The formal expansions of the asymptotic optimal boundaries for the various decision problems obtained in this chapter are useful in giving us some idea of the asymptotic shapes of the Bayes sequential testing regions, a concept developed by Schwarz [34] in a somewhat similar context. We hope that the exact boundary for large A can be computed from the asymptotic results with no appreciable error.

4.1 Binomial problem. As shown in Chapter III (cf. (2.67)), the optimum boundary can be characterized in terms of the gain function V(a,b), which assumes on the boundary and outside it the value

(4.1) V(a,b) = V_0(a,b),

and satisfies in the go-region the recursion relation

(4.2) V(a,b) = (a/(a+b)) V(a+1,b) + (b/(a+b)) V(a,b+1) − C(a,b),

where

(4.3) C(a,b) = 1 for constant cost, and C(a,b) = E_{ξ(a,b)}|p − p_0| for absolute deviation cost.

It is possible to give, as in (2.70), a similar (with obvious modifications) characterization of the optimum boundary in terms of the net gain function

(4.4) M(a,b) = V(a,b) − V_0(a,b).

We now consider what kind of limiting conditions are appropriate if the boundary is to be non-degenerate.
Without going into details, we state that so long as p_0 is a fixed positive number bounded away from 0 and 1, a and b must approach infinity at the same rate in order that the asymptotic boundary be non-degenerate. Under this limiting condition, the problem of testing p ≤ p_0 against p > p_0 reduces, after proper standardization, to that of testing μ ≤ μ_0 against μ > μ_0, where μ is the mean rate of increase of a standard Gaussian process. With p_0 = 1/2, μ_0 = 0. In this symmetric case it is the limiting tendencies of the two pertinent parameters a+b ≡ u and a−b ≡ v that determine whether or not the optimal boundary is non-degenerate asymptotically. It is possible to show from Chernoff [14] that the appropriate normalization of u, v and V or M for a non-degenerate boundary depends on the type of cost and can, in fact, be determined as given below. The appropriate normalization consists of:

Case (i): constant cost:

(4.5) x = (a+b)/A^{2/3}, y = (a−b)/A^{1/3};

Case (ii): absolute deviation cost:

(4.6) x = (a+b)/A, y = (a−b)/A^{1/2}, V̄(x,y) = V(a,b)/A^{1/2}.

Moriguti and Robbins [32] studied the limiting behaviour of the optimum boundary under the normalization (4.5); we shall do the same in this chapter under the normalization (4.6). For convenience we denote by L the limiting tendency in which a, b tend to infinity along with A in such a way that x, y in (4.6) remain finite.

Lemma 4.1. Under L,

A^{1/2} E_{ξ(a,b)}|p − 1/2| → C̄(x,y) = (1/√x) φ(y/√x) + (y/(2x)) [Φ(y/√x) − Φ(−y/√x)],

say, where φ and Φ are the standard normal density and distribution function respectively.

Proof: It is well known [15, p. 25] that the beta density of p converges to the normal density under L. The standardized variable is

t = (p − E(p)) / √V(p),

where

E(p) = a/(a+b) = 1/2 + (y/(2x)) A^{−1/2}

and

V(p) = ab/((a+b)^2 (a+b+1)) = (x^2 − y^2/A) / (4x^2 (Ax+1)).

Hence

p = E(p) + t √V(p) = 1/2 + A^{−1/2} [ y/(2x) + t/(2√x) ] (1 + o(1)).
Now, t is asymptotically N(0,1), and it follows from Theorem 2 of [17] that E|t| is bounded, so that the error term converges to zero in mean as A → ∞. Thus

A^{1/2} E|p − 1/2| → E_t | y/(2x) + t/(2√x) | = (1/√x) φ(y/√x) + (y/(2x)) [Φ(y/√x) − Φ(−y/√x)],

which proves the lemma.

The following development parallels very closely the corresponding development sketched very briefly by Moriguti and Robbins [32] for the case of constant cost. It follows from (4.2) that within the go-region V̄(x,y) satisfies the relation

(4.7) A^{1/2}[ V̄(x,y) − V̄(x+A^{−1}, y) ]
      = (1/2) A^{1/2}[ V̄(x+A^{−1}, y+A^{−1/2}) − 2V̄(x+A^{−1}, y) + V̄(x+A^{−1}, y−A^{−1/2}) ]
      + (y/(2x)) A^{−1/2} A^{1/2}[ V̄(x+A^{−1}, y+A^{−1/2}) − V̄(x+A^{−1}, y−A^{−1/2}) ]
      − C( (Ax + A^{1/2}y)/2, (Ax − A^{1/2}y)/2 ).

Multiplying both sides of (4.7) by A^{1/2}, letting A → ∞, and using Lemma 4.1, we see that V̄ satisfies, under L, the following partial differential equation:
Suppose that the point (u,v) is on the (u+l,v+l) being outside, (u+l,v-l) being inside the goThen = V I ( U, V ) and VI ( U, v) o , VI(u+l,V+l) = VI(u+l,v+l); o where and = Vo«u+v)/2, V~(u,V) Let us further assume that v-l > VI(U,V) = VI(U,V) = AV/2u, o (u-v)/2) • o. Then substitution of and VI(u+l,v+l) = VI(u+l,v+l) = A(V+l)/2(u+l) 0 into V'(u,V) = '2~ fVI(u+l,v+l) + V'(u+l,v- 117 rv' (u+l,v+l ) .. VI (u+l,v-l_)7 - C(u+v 2 ' 2u-v) v_ + 2u obtained from (4.2), yields 104 Av 2u = I( )7 '21 _r A(v+1) 2(u+1) + v u+1.. v-1_ r v + -2u i.e ... vr(u+1..v-l) A v+1) - VI( u+1 ..v- 1)7 - c(u+v u-v) 2 u+1 _ .- 2 .. - 2 =~f~:i~ 2u + u - v c(u+v .. u-v) 2"2 · But V~(u+1.. V-1) = A(v-1) 2(u+iJ- • Thus 2u c(u+v .. E.:! ) VI (u+1.. v-1) - V~( u+1..v-l) = 2 2 u-v Le., W(X+A-l,Y_A- 1/ 2 ) 1/~ = x-y A c(~ Ax + ¥1/2y .. ~Ax _ ~ A1/ 2 y) • Expanding the- left hand side and using Lem:ma 4.1.. we get But W(x..y) = 0.. since (x.. y) is on the boundary. Hence.. (4.19) implies that (4.20) Therefore.. in the limit, we should expect that (4.21) oW oY = 0 Y=Y1(x) The problem of finding the optimum boundary along with the optimum net gain function thus reduces to the following 105 FBEE.BOUllJDARY PROBLEM: To solve for the unknown W(x"y) and the unknown boundary Y (x)" l given that W satisfies the partial differential equation (4.22) 1 o2w + I. 2 Oy2 £..!! .£.1! + x 0 y 0 y + 0 W 0 x = C(x"y) where = - C(x"y) Y 2x 1 ( Y + - ) ¢ (-) )Xl JX y .- jX (_ jX ..L)} . in the region (4.24) and where W(x"y) (4.25) U! , o y satisfies the boundary condition = - 2~ "for y=O·· . 0 <x < for 0 < x < co for 0 <x < co " , and the free boundary conditions (4.26) Y1(x) W\ Y = Y ( x ) l -- 0 o wi o Y Y = Yl(x) - 0 " , gives the upper boundary. With respect to Y" and W(x"y) (Xl " • The problem being symmetrical = W(x,-y) , the lower boundary is given simply by -Y (x). Thus it is sufficient to consider only Y > O. 
We now make a transformation to reduce (4.22) to the heat equation; we state it as a lemma, omitting the proof.

Lemma 4.2. The non-homogeneous partial differential equation

(1/2) ∂²W/∂y² + (y/x) ∂W/∂y + ∂W/∂x = f(x,y),

where f(x,y) satisfies the corresponding homogeneous partial differential equation, is reduced to the heat equation, viz.,

∂²F/∂z² − ∂F/∂t = 0,

by the transformation

(4.30) t = 1/(2x), z = y/x, F = W − x f.

It is easily verified that C̄(x,y) satisfies the homogeneous partial differential equation

(1/2) ∂²C̄/∂y² + (y/x) ∂C̄/∂y + ∂C̄/∂x = 0.

Hence, by the lemma above, if we define

(4.32) W_1(t,z) = W(x,y) − x C̄(x,y),

where t and z are defined by (4.30), then the free-boundary problem stated above reduces to the following

TRANSFORMED FREE-BOUNDARY PROBLEM: To solve for the unknown W_1(t,z) and the unknown boundary z_1(t), given that W_1 satisfies the partial differential equation

(4.33) ∂²W_1/∂z² − ∂W_1/∂t = 0 in 0 < z < z_1(t), 0 < t < ∞,

and satisfies the boundary conditions

(4.34) ∂W_1/∂z |_{z=0} = −1/2 − ∂C_1/∂z |_{z=0},

(4.35) W_1 |_{z=z_1(t)} = −C_1(t, z_1(t)),

(4.36) ∂W_1/∂z |_{z=z_1(t)} = −∂C_1/∂z |_{z=z_1(t)},

where

C_1(t,z) = x C̄(x,y) = (1/(2t)) C̄(1/(2t), z/(2t)).

4.2 Series expansion for the optimal solution for large x (binomial problem).

As it is very difficult to find the solution of the transformed free-boundary problem, our objective in the following analysis is limited to finding a formal series expansion of its solution for small t (as in Moriguti and Robbins [32]). Thus, assuming a series expansion of the unknown boundary,

(4.38) z_1(t) = Σ_m C_m t^m,

for small t, where the range of the summation index m will be settled presently, we are led to consider solutions of the heat equation (4.33) involving powers of t. It may be verified that the function

(4.39) t^n · 1F1(−n, 1/2; −z²/4t),

where

(4.40) 1F1(a, b; w) = Σ_{k≥0} [(a)_k / (b)_k] w^k / k!

denotes the confluent hypergeometric function and n is a constant, satisfies the heat equation (4.33).
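The family (4.39) can be checked mechanically: for non-negative integral n the Kummer series terminates and t^n · 1F1(−n, 1/2; −z²/4t) is a polynomial (the even "heat polynomials"). The following sketch is my own; `kummer_poly` is a hypothetical helper, not thesis notation:

```python
# Verify that t**n * 1F1(-n, 1/2; -z**2/(4t)) solves u_t = u_zz for small n.
import sympy as sp

t, z = sp.symbols('t z')

def kummer_poly(n, w):
    """1F1(-n, 1/2; w): terminating Kummer series for non-negative integer n."""
    return sum(sp.rf(-n, k) / (sp.rf(sp.Rational(1, 2), k) * sp.factorial(k)) * w**k
               for k in range(n + 1))

for n in range(1, 5):
    u = sp.expand(t**n * kummer_poly(n, -z**2 / (4*t)))
    assert sp.expand(sp.diff(u, t) - sp.diff(u, z, 2)) == 0
print("heat-equation property verified for n = 1..4")
```

For n = 2, for example, the polynomial is t² + t z² + z⁴/12, whose t-derivative and second z-derivative both equal 2t + z².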
The most general form of the solution of (4.33) which belongs to this family and satisfies (4.34) is thus given by

(4.41) W_1(t,z) = −z/2 + Σ_n A_n t^n 1F1(−n, 1/2; −z²/4t),

where the A_n are constants and the range of the summation index n is determined, along with that of m in (4.38), from the two free boundary conditions (4.35) and (4.36). (Note that C_1 is even in z, so that ∂C_1/∂z |_{z=0} = 0.) It will be seen below that these two conditions are sufficient to determine the unknown constants C_m and A_n.

To facilitate the later analysis we express the φ and Φ functions in terms of confluent hypergeometric functions. To this end we note the following results (Erdélyi [11]):

(4.44) φ(x) = (1/√(2π)) 1F1(a, a; −x²/2), −∞ < x < ∞, a ≠ 0, and 2y · 1F1(1/2, 3/2; y) = 1F1(1/2, 1/2; y) − 1F1(−1/2, 1/2; y),

and in particular, putting y = −x²/2,

(4.45) φ(x) + (x/2)[Φ(x) − Φ(−x)] = (1/√(2π)) 1F1(−1/2, 1/2; −x²/2),

so that

C_1(t,z) = (1/(2√(πt))) 1F1(−1/2, 1/2; −z²/4t).

Utilizing (4.44) and (4.45), the free boundary conditions (4.35) and (4.36) can be expressed as

(4.42) Σ_n A_n t^n 1F1(−n, 1/2; −z_1²(t)/4t) = z_1(t)/2 − (1/(2√(πt))) 1F1(−1/2, 1/2; −z_1²(t)/4t),

(4.43) Σ_n n A_n t^{n−1} z_1(t) 1F1(−n+1, 3/2; −z_1²(t)/4t) = 1/2 − (z_1(t)/(4t√(πt))) 1F1(1/2, 3/2; −z_1²(t)/4t).

We now consider the range of m and n in (4.38) and (4.41) respectively. We first note that in (4.38) m must be non-negative, as otherwise the series would diverge for sufficiently small t. Next, it has been shown by Chernoff [14] that y_1(x)/√x is decreasing in x for the case of constant cost; his argument remains valid for absolute deviation cost too. This means, in terms of the transformed variables, that z_1(t)/√(2t) is increasing in t, i.e.,

(4.46) (d/dt)[ z_1(t)/√(2t) ] > 0.

This rules out the possibility that m < 1/2, because otherwise, for sufficiently small t, a term with a negative coefficient in (4.46) might be dominant, making (4.46) nonpositive, contrary to hypothesis. Thus if

(4.47) z_1(t) = Σ_{m ≥ 1/2} C_m t^m,

then we may write

(4.48) z_1²(t)/4t = Σ_{m ≥ 0} d_m t^m,

say. Substituting (4.48) in (4.42), we notice that on the right-hand side of (4.42) the minimum power of t is −1/2; hence the series on the left-hand side must start with n = −1/2.
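Identity (4.45), and with it the closed form of C_1(t,z) used in (4.42)-(4.43), can be spot-checked numerically. The following is my own sketch using SciPy's Kummer function:

```python
# Spot-check (mine) of (4.45):
# phi(u) + (u/2)*(Phi(u) - Phi(-u)) = (1/sqrt(2*pi)) * 1F1(-1/2, 1/2; -u**2/2).
from math import erf, exp, pi, sqrt
from scipy.special import hyp1f1

def lhs(u):
    phi = exp(-u*u/2) / sqrt(2*pi)
    return phi + (u/2) * erf(u / sqrt(2))     # erf(u/sqrt(2)) = Phi(u) - Phi(-u)

def rhs(u):
    return hyp1f1(-0.5, 0.5, -u*u/2) / sqrt(2*pi)

for u in (0.0, 0.3, 1.0, 2.5, 5.0):
    assert abs(lhs(u) - rhs(u)) < 1e-10
print("identity (4.45) verified")
```

With u = z/√(2t) this gives C_1(t,z) = (1/(2√(πt))) 1F1(−1/2, 1/2; −z²/4t), the form used above.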
By what steps n should increase on the left-hand side of (4.42) depends, of course, on the steps by which m increases in (4.38). For simplicity we take the step in (4.38) as 1/2, any smaller step only increasing the number of unknown coefficients to be determined. We are thus finally led to consider both m and n as half-integers, m starting from 1/2 and n starting from −1/2, in (4.38) and (4.41) respectively.

If we now substitute these expansions in (4.42) and (4.43) and equate the coefficients of equal powers of t from both sides of the equations, we are led to the following results. Equating coefficients of t^{−1/2} from both sides of (4.42), we get

(4.50) −(1/(2√π)) 1F1(−1/2, 1/2; −C_{1/2}²/4) = A_{−1/2} 1F1(−1/2, 1/2; −C_{1/2}²/4),

while equating coefficients of t^{−1} from both sides of (4.43) leads, for C_{1/2} ≠ 0, to a contradiction. Hence C_{1/2} = 0, and hence from (4.50), A_{−1/2} = −1/(2√π).

Substituting these values for the constants in the subsequent equations, the next two coefficients are determined; these are then substituted in the subsequent equations, and the process is repeated. The first few non-vanishing values are found to be

C_{3/2} = π^{1/2}, C_{5/2} = −(7/24) π^{3/2}, C_{7/2} = (197/640) π^{5/2},
A_{−1/2} = −1/(2√π), A_{1/2} = (1/4) π^{1/2}, A_{3/2} = (1/6) π^{3/2}, A_{5/2} = (529/1440) π^{5/2},

and all other coefficients are zero. We thus arrive at the following series expansion of the solution of the transformed free boundary problem:

z_1(t) = π^{1/2} t^{3/2} − (7/24) π^{3/2} t^{5/2} + (197/640) π^{5/2} t^{7/2} − ...,

with W_1(t,z) given by (4.41) with the coefficients above. In terms of the original variables x and y (see (4.6)) we get the following series expansion, for large x, of the optimum boundary,

(4.54) y_1(x) = (√(2π)/4) x^{−1/2} [ 1 − (7π/48) x^{−1} + (197π²/2560) x^{−2} − ... ],

and of the optimum net gain function (4.12) a corresponding series in the functions 1F1(−n, 1/2; −y²/2x).
In particular, at y = 0 the latter reduces to a power series in 1/x.

For the case of constant cost, Moriguti and Robbins [32] found the following:

y_1(x) = 1/(2x) − 1/(48x⁴) − ...,

with a corresponding series for the net gain function in the functions 1F1(−n, 1/2; −y²/2x), where x and y are now defined in (4.5).

4.3 Trinomial problem. As shown in Chapter II, (2.105), the optimum boundary can be characterized in terms of the gain function V(a,b,c), which assumes on and outside the boundary the value

(4.57) V(a,b,c) = V_0(a,b,c),

and satisfies in the go-region the recursion relation

(4.58) V(a,b,c) = (a/(a+b+c)) V(a+1,b,c) + (b/(a+b+c)) V(a,b+1,c) + (c/(a+b+c)) V(a,b,c+1) − C(a,b,c),

where

(4.59) C(a,b,c) = 1 for constant cost, and C(a,b,c) = E_{ξ(a,b,c)}|μ_1 − μ_2| for absolute deviation cost.

It is possible to give, as in (2.108), a similar (with obvious modifications) characterization of the optimum boundary in terms of the net gain function

(4.60) M(a,b,c) = V(a,b,c) − V_0(a,b,c).

As in the binomial problem, the appropriate limiting conditions for the asymptotic boundary to be non-degenerate depend on the type of cost per observation. In this problem, apart from the two pertinent parameters a+b = u and a−b = v, we have another, c. It is clear that if the order of magnitude of c is less than that of a+b as these parameters tend to infinity, the trinomial problem essentially reduces asymptotically to the binomial problem considered in the last two sections. To retain the essential feature of the trinomial problem, then, we keep the order of c the same as that of a+b. The two kinds of approach of u, v and c to infinity, for the boundaries to be non-degenerate, are the same as those in the binomial problem for the corresponding two types of cost function. Specifically, the appropriate normalization consists of:

Case (i): constant cost:

(4.61) x = (a+b)/A^{2/3}, y = (a−b)/A^{1/3}, z = (a+b+c)/A^{2/3}, V̄(x,y,z) = V(a,b,c)/A^{2/3}, z > x > 0;
Case (ii): absolute deviation cost:

(4.62) x = (a+b)/A, y = (a−b)/A^{1/2}, z = (a+b+c)/A, V̄(x,y,z) = V(a,b,c)/A^{1/2}, z > x > 0.

We denote, for convenience, the two types of limiting tendencies specified in the above two normalizations by L_1 and L_2 respectively. We note that if the order of magnitude of c is less than that of a+b, then the corresponding solutions can be obtained as the special case z = x. We now treat the two cases separately.

Case (i). From (4.58) one obtains a finite-difference relation (4.63), and letting A → ∞ we see that V̄ satisfies, under L_1, the partial differential equation

(4.64) (x/(2z)) ∂²V̄/∂y² + (x/z) ∂V̄/∂x + (y/z) ∂V̄/∂y + ∂V̄/∂z = 1.

On or outside the boundary, V̄(x,y,z) is given by

(4.65) V̄(x,y,z) = V̄_0(x,y,z),

where

(4.66) V̄_0(x,y,z) = V_0(a,b,c)/A^{2/3} = |y|/z,

which satisfies, in the region y > 0, the homogeneous partial differential equation

(4.67) (x/(2z)) ∂²V̄_0/∂y² + (x/z) ∂V̄_0/∂x + (y/z) ∂V̄_0/∂y + ∂V̄_0/∂z = 0.

Thus the normalized net gain function

(4.68) W(x,y,z) = M(a,b,c)/A^{2/3} = V̄(x,y,z) − V̄_0(x,y,z)

satisfies, under L_1, the partial differential equation

(4.69) (x/(2z)) ∂²W/∂y² + (x/z) ∂W/∂x + (y/z) ∂W/∂y + ∂W/∂z = 1.

We state below a lemma, without proof, that will be used in the second case as well as in Chapter V.

Lemma 4.3. If the parameters a, b, c of the Dirichlet distribution ξ(a,b,c) tend to infinity along with A in such a way that

(4.70) x = (a+b)/A^{2α}, y = (a−b)/A^{α}, z = (a+b+c)/A^{2α}, α > 0,

remain finite, then

A^{α} E_{ξ(a,b,c)}|μ_1 − μ_2| → C̄(x,y,z) = (√x/z) [ 2φ(y/√x) + (y/√x)(Φ(y/√x) − Φ(−y/√x)) ],

say.
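Lemma 4.3 lends itself to a quick Monte-Carlo sanity check. The sketch below is my own (the formula for C̄ is as reconstructed above; the parameter values, sample size, and tolerance are arbitrary choices), taking α = 1/2:

```python
# Monte-Carlo check (mine) of Lemma 4.3 with alpha = 1/2: for
# a = (A*x + sqrt(A)*y)/2, b = (A*x - sqrt(A)*y)/2, c = A*(z - x),
# sqrt(A)*E|mu1 - mu2| under Dirichlet(a, b, c) should approach C(x, y, z).
from math import erf, exp, pi, sqrt
import numpy as np

def C_bar(x, y, z):
    d = y / sqrt(x)
    phi = exp(-d*d/2) / sqrt(2*pi)
    return (sqrt(x)/z) * (2*phi + d*erf(d/sqrt(2)))

A, x, y, z = 1e6, 1.0, 0.5, 2.0
a, b, c = (A*x + sqrt(A)*y)/2, (A*x - sqrt(A)*y)/2, A*(z - x)
rng = np.random.default_rng(0)
mu = rng.dirichlet([a, b, c], size=200_000)
estimate = sqrt(A) * np.abs(mu[:, 0] - mu[:, 1]).mean()
print(estimate, C_bar(x, y, z))   # the two should agree to about 1%
```

Note that at z = x the formula gives twice the binomial C̄ of Lemma 4.1, consistent with μ_1 − μ_2 = 2(p − 1/2) when c is negligible and with the doubling remark at the end of section 4.4.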
Case (ii). Just as in case (i), it follows from (4.58) that V̄ satisfies a finite-difference relation (4.72) analogous to (4.7), with increments A^{−1} in x and z, increments A^{−1/2} in y, and cost term C( (Ax+A^{1/2}y)/2, (Ax−A^{1/2}y)/2, Az−Ax ). Multiplying both sides of (4.72) by A^{1/2}, letting A → ∞, and using Lemma 4.3 with α = 1/2, we see that V̄ satisfies, under L_2, the partial differential equation

(4.73) (x/(2z)) ∂²V̄/∂y² + (x/z) ∂V̄/∂x + (y/z) ∂V̄/∂y + ∂V̄/∂z = C̄(x,y,z).

It turns out that V̄_0(x,y,z) = V_0(a,b,c)/A^{1/2} = |y|/z, as in case (i), and hence V̄_0 satisfies the equation (4.67). Hence the normalized net gain function

(4.73a) W(x,y,z) = M(a,b,c)/A^{1/2} = V̄(x,y,z) − V̄_0(x,y,z)

satisfies, under L_2, the same differential equation (4.73).

Combining the two cases, we may state that, under the appropriate limiting conditions, W satisfies, for both types of cost per observation, the partial differential equation

(4.74) (x/(2z)) ∂²W/∂y² + (x/z) ∂W/∂x + (y/z) ∂W/∂y + ∂W/∂z = C̄(x,y,z),

where

(4.75) C̄(x,y,z) = 1 for constant cost, and C̄(x,y,z) = (√x/z)[ 2φ(y/√x) + (y/√x)(Φ(y/√x) − Φ(−y/√x)) ] for absolute deviation cost,

in the three-dimensional region between the plane y = 0 and the boundary surface y = Y_1(x,z), say. At the plane y = 0, symmetry gives the condition ∂V̄/∂y |_{y=0} = 0 in both cases. This, together with ∂V̄_0/∂y |_{y=0} = 1/z, gives the boundary condition satisfied
boundary Y1(x"z) Thus it is sufficient to consider only y > 0 • The problem of finding the optimum boundary along with the optimum net gain function is thus reduced to a free-boundary problem which can be stated as follows after the following transformation of variables: r = zx , 1< s = 1. o < s < (Xl t = 2x W1 (r,s,t) X 1 " J r , < (Xl O<t<(Xl J , = w(x,y,z) - x 5(x,y"z) = ,1 s r) W\~, 2~" 2t - -( ) C1 r,s,t , say • 121 TRANSFOBMED FREE-BOUNDARY PROBLEM: To solve for the unknown boundary \'T (r,s,t) where l (r,t) and the unknown 1 satisfies the partial differential equation: W l 6 i o vTl in 0 at= ; \ "\ \- o< s , , ~ sl(r , t) l<r< co O<t< co with the following boundary conditions: o Wli (4.80) d sI\ 8=0 = - 1 -r 0 eli -0 Is - s=o , (4.81) o Cl' (4.82) =- o s l Now =ii I .t:'\ r 1 1"\.1- c:.u for case (i) . 1 - ! J,( )\ -1 . /r;:;z 2t.LT¢(- s) + -s ' ;\p I -S)1 - I 7, for case (ii) :.t"\ j2t j2t . 2 ,j2t - the latter formula reducing to -t2 using (4.84) (4.45). o C1 . Thus , for case (i) as 1 for case (ii). 122 Hence a clIo sl· s=o is 0 for both cases. 4.4 Series expansion for the optimal solution for large x (trinomial problem) Let (4.85) • He no'W consider the two cases separately. case (ih constant constant: . 
To solve for the unknown s_1(r,t) and W_2(r,s,t), where W_2 satisfies the partial differential equation

(4.86) ∂W_2/∂t = ∂²W_2/∂s²,

with the boundary conditions

(4.87) ∂W_2/∂s |_{s=0} = 0,

(4.88) W_2 |_{s=s_1(r,t)} = s_1(r,t)/r − 1/(2t),

(4.89) ∂W_2/∂s |_{s=s_1(r,t)} = 1/r.

As we are interested in solving this free boundary problem only for small t, we may assume, as in the free-boundary problem of Moriguti and Robbins [32], a series expansion in t of the unknown boundary surface,

(4.90) s_1(r,t) = C_0(r) + C_1(r) t + C_2(r) t² + ...,

and hence a solution of (4.86), with boundary condition (4.87), of the form

(4.91) W_2(r,s,t) = Σ_n A_n(r) t^n 1F1(−n, 1/2; −s²/4t),

where, unlike the previous situation, the coefficients C_m(r) and A_n(r) may now depend on r. It is possible to show, as in Moriguti and Robbins' problem, that the summation in (4.91) is over integral values of n starting with n = −1. Moreover, the two free boundary conditions (4.88) and (4.89) are sufficient to determine the arbitrary functions A_n(r) and C_m(r) completely. In fact, they may be obtained successively, just as before, by equating the coefficients of equal powers of t from both sides of each of the following equations, obtained by substituting (4.90) and (4.91) in (4.88) and (4.89):

(4.92) Σ_{n ≥ −1} A_n(r) t^n 1F1(−n, 1/2; −s_1²(r,t)/4t) = s_1(r,t)/r − 1/(2t),

(4.93) Σ_{n ≥ −1} n A_n(r) t^{n−1} s_1(r,t) 1F1(−n+1, 3/2; −s_1²(r,t)/4t) = 1/r.

The results are that only the coefficients C_m(r) with m = 2, 5, 8, ... are non-zero, and all other C's and A's vanish. The first few coefficients are:

C_2(r) = 2 r^{−2}, C_5(r) = −(16/3) r^{−6}, C_8(r) = (896/15) r^{−10},
A_{−1}(r) = −1/2, A_0(r) = −1/(2r), A_2(r) = −(14/3) r^{−7}, A_3(r) = (148/15) r^{−11}.

Thus a series expansion in t of the optimum boundary s_1(r,t), for small t, may be written as

(4.94) s_1(r,t) = 2 (t/r)² − (16/3)(1/r)(t/r)⁵ + (896/(15r²))(t/r)⁸ − ....
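The passage from (4.94) back to the original variables can be verified symbolically. The following is my own check, taking the coefficients of (4.94) as read and substituting r = z/x, t = 1/(2x):

```python
# Symbolic check (mine): x * s1(z/x, 1/(2x)) from (4.94) equals
# (x/(2 z**2)) * (1 - x/(3 z**4) + (7/15) (x/z**4)**2), through three terms.
import sympy as sp

x, z = sp.symbols('x z', positive=True)
r, t = z/x, 1/(2*x)

s1 = 2*(t/r)**2 - sp.Rational(16, 3)*(t/r)**5/r + sp.Rational(896, 15)*(t/r)**8/r**2
y1 = sp.simplify(x * s1)

target = x/(2*z**2) * (1 - x/(3*z**4) + sp.Rational(7, 15)*(x/z**4)**2)
assert sp.simplify(y1 - target) == 0

# Putting z = x gives 1/(2x) - 1/(6 x**4) + 7/(30 x**7), cf. (4.96):
assert sp.simplify(y1.subs(z, x) - (1/(2*x) - 1/(6*x**4) + sp.Rational(7, 30)/x**7)) == 0
print("boundary expansion checks pass")
```

This confirms that the three displayed terms of (4.94) and of the expansion in the original variables are algebraically consistent with one another.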
In terms of the original variables (see (4.61)) this becomes

(4.95) Y_1(x,z) = x s_1(z/x, 1/(2x)) = (x/(2z²)) [ 1 − (1/3)(x/z⁴) + (7/15)(x/z⁴)² − ... ],

whichever form is convenient. As z → x, both forms reduce to

(4.96) Y_1(x,x) = 1/(2x) − 1/(6x⁴) + ....

A more physically meaningful reparametrization of the boundary surface may be given in terms of the parameter η = 1/r = x/z, 0 < η ≤ 1 (the proportion of untied observations, in terms of the original double dichotomies problem), as

Y_1(z,η) = (η/(2z)) [ 1 − η/(3z³) + (7/15)(η/z³)² − ... ] (for large z).

The corresponding expansion of the net gain function, for large z, may be written as a series in the functions 1F1(−n, 1/2; −y²/2x); its first correction terms involve 1F1(−5, 1/2; −y²/2x) and 1F1(−8, 1/2; −y²/2x).

Case (ii). In this case W_2 satisfies (4.86) with the boundary condition (4.87), the same as in case (i). Only the free-boundary conditions (4.88) and (4.89) are changed, to

(4.102) W_2 |_{s=s_1(r,t)} = s_1(r,t)/r − (1/(r√(πt))) 1F1(−1/2, 1/2; −s_1²(r,t)/4t),

(4.103) ∂W_2/∂s |_{s=s_1(r,t)} = 1/r − ∂C_1/∂s |_{s=s_1(r,t)},

respectively. For small t, a series expansion of the solution may be taken as

(4.104) s_1(r,t) = Σ_m C_m(r) t^m,

(4.105) W_2(r,s,t) = Σ_n A_n(r) t^n 1F1(−n, 1/2; −s²/4t),

where m and n are half-integers, the summations starting from m = 3/2 in (4.104) and n = −1/2 in (4.105), just as in the corresponding binomial situation. Substituting (4.104) and (4.105) in (4.102) and (4.103) and equating coefficients of equal powers of t from both sides of the resulting equations, the unknown coefficients C_m(r) and A_n(r) may be determined successively. It is found that only finitely spaced coefficients survive, all other C's and A's being zero; the first few non-zero coefficients are

C_{3/2}(r) = √π / r and A_{−1/2}(r) = −1/(r√π),

the higher ones involving powers of √π/r. Thus a series expansion in t of the optimum boundary s_1(r,t), for small t, may be written as

s_1(r,t) = (√π/r) t^{3/2} − (7/24)(√π/r)³ t^{5/2} + (197/640)(√π/r)⁵ t^{7/2} − ....
1 - 7'Jt 1 Z 48 2 + as: 1971(2 1 7 160'.16'" - ••~ , Z or, alternatively, in ter-ms of 'W"' , the proportion of untied observationa, and z, the total amount of sampling, as ( 4.10 8) r *( J) J2'Jt-Gr 1 7'Jt 1 197' 'Jt2 1 Y1 z,1J = 4 - _ 1 -"4tr 2 + 1bO..16 ..,- JZ z or, still, alternatively, in ter-ms of If;f and z x, as ~4 T x All three for-ms coincide as z -> x i.e., as 1:jj - ... -> 1 to The net gain function turns out as (4.111) + 2(2'Jt)5/2 529 (x)6 1 F1(-11/2,1/2;-Y2/2x) 4 ~2 r 1 1440 _. 4 z x .J./ c: - ... . _7 • 128 Thus - ... In tbe special easel Z = x I 1 1 ~/2 - -;;rT2 2(21r)?/2 529 + 1440'" 44 1 1172 - ••• x • It may be noted that (4.110) is the same as (4.54)1 the ex,pansion for the boundary Y1(x) viation cost. A1so1 in the binomial problem 'Wi~h absolute de- W(x,y"x) and W(x"O"x) in (4.113) are found to be double the corresponding eXJ?ressions (4.55) and (4.56) for the later situation. CRAFTER V BOUNDS ON ASYMPTCYrIC SOWTIONS 5.0 Introduction and summary. In Chapter IV we obtained a formal expansion in negative powers of x (for large x) of the (normalized) optiIllal (Bayes) boundary, Yl and that of the corresponding (normalized) net gain function W as A -> CIO for the various sequential decision problems considered in Chapter II. It is, however, very difficult to justify these formal expansions. 132,7 The expansion obtained by Moriguti and Robbins for their problem (involving binomial model with constant cost) has been shown by Breakwell and Chernoff fl?;7 to be an asymptotic one as x---> term. CIO in the sense that the error is small compared to the last Their proof is rather involved. We shall consider in this chapter a less ambitious objective of obtaining some bounds on the true boundary using Bather's f§.7 method. for all x, large and small. boundary when x -> CIO These bounds are valid In fact, they converge to the true as well as when x -> o. 
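The building blocks of the Chapter IV expansions (4.91) and (4.105) are the Kummer functions t^n 1F1(-n, 1/2; -s^2/4t). For positive integral n these are the classical heat polynomials (n = 1 gives t + s^2/2), and they satisfy the heat equation u_t = u_ss, of which (4.86) is a normalization. The following numerical check is illustrative only (it is not part of the thesis, and tests the building blocks rather than the thesis's exact constants):

```python
# Check (illustrative): u_n(s,t) = t^n * 1F1(-n, 1/2; -s^2/(4t)) solves the
# heat equation u_t = u_ss, verified by central finite differences.
from scipy.special import hyp1f1

def u(n, s, t):
    return t**n * hyp1f1(-n, 0.5, -s**2 / (4.0 * t))

h = 1e-4
for n in (1, 2, 3):
    for s, t in [(0.3, 0.5), (1.0, 0.7), (2.0, 1.3)]:
        u_t = (u(n, s, t + h) - u(n, s, t - h)) / (2 * h)
        u_ss = (u(n, s + h, t) - 2 * u(n, s, t) + u(n, s - h, t)) / h**2
        assert abs(u_t - u_ss) < 1e-4, (n, s, t)
print("heat-equation check passed")
```

For terminating cases the series can also be expanded by hand; e.g. n = 2 gives u = t^2 + s^2 t + s^4/12, whose t-derivative and second s-derivative are both 2t + s^2.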
In as much as these bounds provide even approximately the true boundary for intermediate values of x, they are useful in themselves. Besides, in so far as the expansions of these bounds for large x are consistent with the expansions obtained in Chapter IV, they provide an independent check on them.

As we are going to follow Bather's methods rather closely, we try to keep matters simple by retaining his notation as far as possible, consistent with our previous notation. Bather's results, like Chernoff's [14], are concerned with a Gaussian process and deal with the risk function rather than with the gain-function approach of Moriguti and Robbins [30] to characterizing the optimum (Bayes) sampling rule. Since the optimum boundary remains the same for either approach, our choice in the various studies has been prompted by the relative advantages one offers over the other. In Chapters II and III we concerned ourselves with the risk function as far as the general theory developed with respect to it [9, 24] could take us, and then changed to the gain function for some obvious reasons: the recursion relations developed in Chapter III could only be obtained in terms of the gain function, and in Chapter II the computations of the optimum boundary are easier with the recursion relations involving the gain function than with those involving the risk function. In Chapter IV we retained the gain function in deriving the partial differential equation and the various boundary conditions that it satisfies, just to be consistent with Moriguti and Robbins; but this portion could equally well have been carried out in terms of the risk function, with slight modifications of the resulting partial differential equations (viz., changing the sign of W, etc.) and of the boundary conditions. For the asymptotic theory the two approaches are thus interchangeable. In order to adopt Bather's technique it is, however, imperative to consider the risk function rather than the gain function, because some of his arguments hinge critically on it.

In the following analysis, therefore, we adapt the derivations of the various differential equations, along with the boundary conditions, to the normalized risk function. These could, for the binomial problem with absolute deviation cost, be easily anticipated from Chernoff's results for the Gaussian process problem with constant cost. Under the particular type of limiting tendencies that we are considering, the trinomial problem would have reduced to a certain (rather specialized) bivariate Gaussian process problem, just as the binomial one reduces to the univariate Gaussian one. We have, however, no special reason for considering this bivariate Gaussian process problem in order to derive the partial differential equations and the boundary conditions satisfied by the standardized risk function for the trinomial problem; they are simply obtained by modifying, as in the binomial problem, the corresponding expressions in terms of the gain function that were derived in Chapter IV.

In Section 5.1 we first consider the binomial problem with absolute deviation cost. The associated free-boundary problem (FBP) is stated in terms of the normalized risk ρ(x,y). Transforming the variables, the FBP is put in the appropriate form for the true boundary α(t). The upper bound λ̄(t) and its expansion for large t are obtained; a similar development is given for the lower bound ρ(t). (The ambiguous use of the same symbol ρ for two different things may be resolved by referring to the arguments.) Section 5.2 is concerned with the corresponding developments for the trinomial problem, with both types of cost.

5.1 Binomial problem with absolute deviation cost.
We have seen in Chapter II, (2.60), that the optimum boundary can be characterized by the Bayes risk ρ(a,b), which satisfies in the go-region the recursion relation

    ρ(a,b) = a/(a+b) ρ(a+1,b) + b/(a+b) ρ(a,b+1) + C(a,b),

where C(a,b) = E[|p - 1/2| given a,b], and which assumes on and outside the boundary the value

    ρ0(a,b) = A E[|p - 1/2| given a,b] - A |E(p given a,b) - 1/2| = A C(a,b) - A |a/(a+b) - 1/2|.

Using the appropriate normalization (4.62) for this case, namely

    x = (a+b)/A,    y = (a-b)/A^(1/2),    ρ(x,y) = ρ(a,b)/A^(1/2),

and modifying the results of Section 4.1, we see that, under the appropriate limiting tendency, the normalized risk ρ(x,y) satisfies the partial differential equation

(5.1)    ∂²ρ/∂y² + ∂ρ/∂x = -C(x,y)

in the symmetric region

    0 < x < ∞,    |y| < y1(x),

where y1(x) is the unknown boundary on which bounds are desired; the boundary conditions are

(5.2)    ∂ρ/∂y = 0 on y = 0, due to symmetry;
(5.3)    ρ = ρ0 on the unknown boundary;
(5.4)    ∂ρ/∂y = ∂ρ0/∂y on the unknown boundary;

where ρ0(x,y) = ρ0(a,b)/A^(1/2). Using (5.1)-(5.4) and Lemma 4.1, we may write

(5.5)    ρ0(x,y) = (2/√x){φ(y/√x) - (|y|/√x) Φ(-|y|/√x)}.

Considering only the upper boundary, (5.3) and (5.4) become (5.6), where the prime denotes differentiation with respect to the second argument.

Let us now put the above facts in Bather's notation. Let

    s = y/√x,    t = x,
    f(s,t) = ρ(x,y),    K(s,t) = ρ0(x,y),    c(s,t) = C(x,y).

Then f(s,t) satisfies the partial differential equation

(5.7)    ∂²f/∂s² + s ∂f/∂s + 2t ∂f/∂t + 2t c(s,t) = 0    in |s| < α(t), 0 < t < ∞,

where α(t) denotes the true (optimum) boundary in the new variables, with the following boundary conditions:

(5.8)    ∂f/∂s = 0 on s = 0, due to symmetry;
(5.9)    f(s,t) = K(s,t) = (2/√t){φ(s) - |s| Φ(-|s|)} = t^(-1/2) L(s), say, whenever |s| ≥ α(t), 0 < t < ∞;
(5.10)   ∂f/∂s(α(t),t) = ∂K/∂s(α(t),t) = -(2/√t) Φ(-α(t)).

Due to symmetry again, we need consider only s ≥ 0.

Upper bound: Following Bather, we consider the following family of separable solutions of (5.7):

    f(s,t) = t^r g(s),    r any real constant.

It follows from (5.9) that r = -1/2.
Hence g(s) must satisfy the ordinary differential equation

(5.11)    g''(s) + s g'(s) - g(s) = -2 t^(3/2) {φ(s) + s[Φ(s) - 1/2]}

in order that t^(-1/2) g(s) be a solution of (5.7). Now (5.11) does not make sense as it stands, since its right-hand side involves t. In order to make sense of it, we consider for the present an auxiliary problem in which everything remains the same as in the original problem except that the cost function is modified so that the right-hand side of (5.11) becomes -2a{φ(s) + s[Φ(s) - 1/2]}, where a is some positive constant. The optimal continuation region for this auxiliary problem is then bounded by straight lines, viz.

    {(s,t): -λ < s < λ}.

The unknown constant λ and the function g(s) are found from the following facts: g(s) satisfies the differential equation

(5.12)    g'' + s g' - g = -2a{φ(s) + s[Φ(s) - 1/2]}    in |s| ≤ λ,

with the boundary conditions

(5.13)    g'(0) = 0,
(5.14)    g(λ) = L(λ),
(5.15)    g'(λ) = L'(λ).

Now the general symmetric solution of the homogeneous equation g'' + s g' - g = 0 is a multiple, γ say, of a fixed even solution, and a particular solution of (5.12) satisfying (5.13) is given in terms of a function V1(s), defined in (5.16) by an iterated integral involving φ. Let h(s) denote the resulting general solution. Then the two constants γ and λ are determined from (5.14) and (5.15), i.e., from

(5.17)    h(λ) = L(λ)    and    (5.18)    h'(λ) = L'(λ).

Writing (5.17) and (5.18) explicitly, we have (5.19) and (5.20). Eliminating γ from (5.19) and (5.20), we get (5.21), which determines the critical level λ̄ uniquely for each a > 0. Uniqueness follows from the positivity of the left-hand side of (5.21), which is a consequence of the definitions of the various functions involved therein.

If we now replace the constant a by the running variable t, the resulting solution λ̄(t) will constitute, following Bather, an upper bound on α(t). It may be verified, moreover, that λ̄(t) is decreasing in t. To show this it is sufficient to show that the left-hand side of (5.21) is increasing in λ; this can be proved in its turn by direct differentiation and by utilization of the differential equation that V1(s) satisfies, namely

    V1''(s) + s V1'(s) - V1(s) = s[1/2 - Φ(-s)].

It can also be seen from (5.21) that λ̄(t) -> ∞ as t -> 0 and λ̄(t) -> 0 as t -> ∞.

It is thus possible, in principle, to obtain the actual values of λ̄(t) by solving (5.21) for λ̄ after substituting a = t. In practice, however, difficulty arises because of the presence of terms involving the untabulated functions V1 and V1'. Now it follows directly from the definition (5.16) of V1 that V1(λ̄) > 0 and V1'(λ̄) > 0, and it can be seen from (5.21) that if these positive terms are omitted from its left-hand side, the resulting solution, λ̃(t) say, can only exceed λ̄(t). Thus an upper bound to the upper bound λ̄(t) may be obtained by solving equation (5.22); λ̃(t), unlike λ̄(t), is amenable to computation using the normal integral tables. It may also be verified from (5.22) that λ̃(t) is decreasing in t and that λ̃(t) -> ∞ as t -> 0 and λ̃(t) -> 0 as t -> ∞.

Asymptotic form of the upper bound for large t:

Lemma 5.1.    √(2π) V1(s) = M4 s^4 + M6 s^6 + ... .

This follows readily from the definition (5.16) of V1(s), using the transformation w' = w/v, v' = v/u, u' = u/s, 0 ≤ w', v', u' ≤ 1, which turns (5.16) into a triple integral (5.23) over the unit cube whose exponential factor is exp{-(s²/2) u'²[1 - v'²(1 - w'²)]}; the coefficients M4, M6, ... may then be determined by expanding the integrand in (5.23) and integrating term by term.

Using Lemma 5.1, it can be shown that for large t

(5.24)    λ̄(t) = √(2/π) t^(-1) [1 + O(t^(-2))],

and similarly λ̃(t) = √(2/π) t^(-1) [1 + O(t^(-2))]. For large t, therefore, we do not lose much by using the computationally more convenient form λ̃(t) instead of λ̄(t).

Lower bound: Without going into the details of the arguments that lead to the inner bound, for which we refer to Bather, we shall just indicate the steps pertinent to our case.
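The existence argument for the inner bound just below rests on Gordon's inequality [20] for Mill's ratio R(x) = (1 - Φ(x))/φ(x), namely x/(1 + x²) < R(x) < 1/x for x > 0, the two bounds pinching together as x -> ∞. The following numerical confirmation is illustrative only (not part of the thesis):

```python
# Check (illustrative) of Gordon's bounds on Mill's ratio
# R(x) = (1 - Phi(x)) / phi(x):  x/(1+x^2) < R(x) < 1/x  for x > 0.
import numpy as np
from scipy.stats import norm

def mills_ratio(x):
    return norm.sf(x) / norm.pdf(x)

xs = np.linspace(0.1, 8.0, 200)
R = mills_ratio(xs)
assert np.all(xs / (1 + xs**2) < R) and np.all(R < 1 / xs)
# the bounds pinch together for large x: x * R(x) -> 1
assert abs(8.0 * mills_ratio(8.0) - 1.0) < 0.02
print("Gordon bounds verified")
```

Using `norm.sf` (the survival function) rather than `1 - norm.cdf` keeps the check accurate in the far tail, where the direct difference would lose all precision.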
Following Bather, we consider here a family of solutions F_a(s,t) of (5.7) and (5.8), depending on a parameter a (equation (5.25)). Eliminating the parameter a between the two equations obtained from the boundary conditions (5.9) and (5.10),

    F_a(ρ(t),t) = K(ρ(t),t),    F_a'(ρ(t),t) = K'(ρ(t),t),

the inner bound ρ(t) is given by equation (5.26), whose right member is 1/(2t). The existence and positivity of ρ(t) follow from the positivity of the left-hand side of (5.26), which is obtained by using Gordon's inequality [20] for Mill's ratio, viz.

    (1 - Φ(x))/φ(x) > x/(1 + x²),    x > 0,

equality obtaining in the limit as x -> ∞. It may be verified from (5.26) that ρ(t) is decreasing in t, by showing that the numerator and the denominator of the left-hand side of (5.26) are, respectively, increasing and decreasing in ρ. It may also be verified that ρ(t) -> ∞ as t -> 0 and ρ(t) -> 0 as t -> ∞. From (5.26) again, the asymptotic form of ρ(t) for large t turns out as

(5.27)    ρ(t) = √(2/π) t^(-1) [1 + O(t^(-2))].

Now, writing (5.24) and (5.27) in terms of the original variables x and y, the upper and lower bounds on the optimum upper boundary for large x are both of the form √(2/π) x^(-1) [1 + O(x^(-2))], and the exact upper boundary (4.54) from Chapter IV has the same form. The first-order term √(2/π) x^(-1) is thus common to all three expansions, and for sufficiently large x the exact boundary y1(x) lies between the two bounds.

5.2 Trinomial problem

As already noted in Chapter II, (2.100), the optimum boundary can be characterized with the help of the Bayes risk ρ(a,b,c), which satisfies in the go-region the recursion relation

    ρ(a,b,c) = [a ρ(a+1,b,c) + b ρ(a,b+1,c) + c ρ(a,b,c+1)]/(a+b+c) + C(a,b,c),

where C(a,b,c) = 1 for constant cost and, for absolute deviation cost, C(a,b,c) is the posterior expectation, given (a,b,c), of the absolute difference between the two parameters under comparison; ρ assumes on and outside the boundary the stopping value ρ0(a,b,c).

Considering the two different types of normalization for the two types of cost, viz.

    (i) constant cost: x = (a+b)/A^(2/3), y = (a-b)/A^(1/3), z = (a+b+c)/A^(2/3), with ρ(x,y,z) = ρ(a,b,c)/A^(2/3) and ρ0(x,y,z) = ρ0(a,b,c)/A^(2/3);
    (ii) absolute deviation cost: x = (a+b)/A, y = (a-b)/A^(1/2), z = (a+b+c)/A, with ρ and ρ0 normalized correspondingly;

using Lemma 4.3 with α = 1/3 for case (i) and with α = 1/2 for case (ii), and re-expressing the results of Section 4.3 in terms of the normalized Bayes risk ρ(x,y,z), we have the following. Under the appropriate limiting tendencies for the two types of cost, ρ(x,y,z) satisfies a partial differential equation (5.28), the analogue of (5.1), whose left member involves ∂²ρ/∂y², ∂ρ/∂y, ∂ρ/∂x and ∂ρ/∂z and whose right member is -C(x,y,z), the normalized cost, in the region (symmetric with respect to y = 0)

    0 < x ≤ z < ∞,    |y| < y1(x,z),

y1(x,z) being the unknown upper boundary surface on which bounds are desired. Further, ρ(x,y,z) satisfies the boundary conditions

(5.29)    ∂ρ/∂y = 0 on y = 0, due to symmetry;
(5.30)    ρ = ρ0 on the unknown boundary surface;
(5.31)    ∂ρ/∂y = ∂ρ0/∂y on the unknown boundary surface;

where

    ρ0(x,y,z) = (2√x/z){φ(y/√x) - (|y|/√x) Φ(-|y|/√x)}.

Considering only the upper boundary y1(x,z), (5.30) and (5.31) may be written explicitly as (5.32) and (5.33), where the prime denotes differentiation with respect to the second argument. We now make the transformation

(5.34)    r = z/x,    s = y/√x,    t = x    (1 ≤ r < ∞, -∞ < s < ∞, 0 < t < ∞),

with f(r,s,t) = ρ(x,y,z), K(r,s,t) = ρ0(x,y,z), and c(r,s,t) the transformed cost; (5.35) records their explicit forms for the two cases, which involve the combinations {φ(s) - |s| Φ(-|s|)} and {φ(s) + s[Φ(s) - 1/2]}, and in particular

    K(r,s,t) = (2/(r√t)){φ(s) - |s| Φ(-|s|)}.

It follows from (5.28)-(5.31) that f(r,s,t) satisfies a partial differential equation (5.36) of the same form as (5.7), with an additional term in ∂f/∂r, in |s| < α(r,t), 1 ≤ r < ∞, 0 < t < ∞, with the following boundary conditions:

(5.37)    ∂f/∂s = 0 on s = 0, due to symmetry;
          f(r,s,t) = K(r,s,t) whenever |s| ≥ α(r,t);
          ∂f/∂s(r, α(r,t), t) = ∂K/∂s(r, α(r,t), t) = -(2/(r√t)) Φ(-α(r,t)).

Again f(r,s,t) = f(r,-s,t) by symmetry, and it is sufficient to consider only s ≥ 0.

Case (i): constant cost.

Upper bound: Because of the particular separable form of K, we seek solutions of the form

(5.38)    f(r,s,t) = (1/(r√t)) g(s).

Substituting (5.38) in (5.36), we see that g(s) should satisfy the ordinary differential equation

(5.39)    g''(s) + s g'(s) - g(s) = -2 r t^(3/2) c(r,s,t).

Remembering the form of c(r,s,t) for this case, we are led to consider the auxiliary problem with a new cost function in which the t-dependent factor on the right of (5.39) is replaced by an arbitrary positive constant a. Proceeding as in the binomial problem, an upper bound λ̄(r,t) on α(r,t) is then obtained from equation (5.40). From (5.40) one can easily deduce the following facts:

(5.41)    λ̄ is decreasing in r as well as in t.
(5.42)    For a fixed r0 ≥ 1, λ̄(r0,t) -> 0 as t -> ∞, and λ̄(r0,t) -> ∞ as t -> 0.
(5.43)    For any two fixed 1 ≤ r1 ≤ r2, λ̄(r1,t) ≥ λ̄(r2,t) for 0 < t < ∞; hence λ̄(r,t) ≤ λ̄(1,t) for 1 ≤ r, 0 < t < ∞.
(5.44)    For large t, an asymptotic form of λ̄ is given by λ̄(r,t) = √(2/π)(1/(r²t^(3/2)))[1 + O(1/(r²t^(3/2)))].

Lower bound: We consider the following family of curves satisfying (5.36) and (5.37):

(5.45)    F_a(r,s,t) = (1/(r√t)){ψ(s) + a(r) e^(-s²/2)},

where ψ(s) = s e^(-s²/2) ∫ from 0 to s of e^(x²/2) dx, and the parameter a is now allowed to depend on r. The boundary conditions (5.32) and (5.33) for the family (5.45) involve the inner bound ρ(r,t); as can be shown similarly to the binomial problem, using the arguments of Bather, they read

(5.46)    F_a(r, ρ(r,t), t) = K(r, ρ(r,t), t),
(5.47)    F_a'(r, ρ(r,t), t) = K'(r, ρ(r,t), t).

Eliminating the parameter a(r) from (5.46) and (5.47), and using (5.34), ρ(r,t) is given by equation (5.48), the analogue of (5.26). The facts (5.41), (5.42) and (5.43) stated above for λ̄ also hold for ρ(r,t), as can be verified from (5.48) utilizing some facts about ψ given by Bather. Also, using the asymptotic form of ψ(s) given by Bather, an asymptotic expansion of ρ(r,t) for large r²t^(3/2) is given by

(5.49)    ρ(r,t) = √(2/π)(1/(r²t^(3/2)))[1 + O(1/(r²t^(3/2)))].

The asymptotic expansion (4.95) of the exact boundary y1(x,z), obtained for large x and hence for large z, when written in the notation of this chapter, is given by

(5.50)    α(r,t) = √(2/π)(1/(r²t^(3/2)))[1 + O(1/(r²t^(3/2)))].

A comparison of (5.50) with (5.49) and (5.44) gives a verification of (4.95).

Case (ii): absolute deviation cost.

Upper bound: Equations (5.38) and (5.39) are obtained as in case (i). Remembering from (5.35) the expression for c(r,s,t) in this case, we are led, as before, to consider the auxiliary problem with the corresponding new cost function. Proceeding as usual, we arrive at the same equation (5.21) defining λ̄ for a fixed a in the auxiliary problem. Replacing a in (5.21) by rt, the resulting equation gives λ̄(r,t), an upper bound on α(r,t). We may, as in the binomial problem, obtain an upper bound λ̃(r,t) to λ̄(r,t), defined by an equation like (5.22) with the t in its right-hand side replaced by rt. The same three facts (5.41), (5.42) and (5.43) also hold for either λ̄(r,t) or λ̃(r,t) in this case. Finally, for large rt, asymptotic expansions for them may be obtained as follows:

(5.51)    λ̄(r,t) = √(2/π)(1/(rt))[1 + O((1/(rt))²)],    λ̃(r,t) = √(2/π)(1/(rt))[1 + O((1/(rt))²)].
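For contrast with these closed-form bounds, the exact (truncated) optimum boundary can always be computed by backward induction on the Chapter II recursions. The sketch below is illustrative only: it treats the binomial recursion of Section 5.1, the horizon N = 60 and loss multiplier A = 100 are hypothetical choices made for the example, and the stopping risk is taken in the form A{C(a,b) - |a/(a+b) - 1/2|} used there:

```python
# Toy backward-induction solution of the binomial Bayes-risk recursion
#   rho(a,b) = a/(a+b) rho(a+1,b) + b/(a+b) rho(a,b+1) + C(a,b)
# in the go-region, with stopping risk rho0(a,b) = A{C(a,b) - |a/(a+b) - 1/2|}.
# Illustrative sketch: A and the truncation point N are hypothetical choices.
from functools import lru_cache
from scipy import integrate
from scipy.special import beta as beta_fn

A = 100.0   # terminal-loss multiplier (hypothetical)
N = 60      # truncation: stop no later than a + b = N (hypothetical)

@lru_cache(maxsize=None)
def C(a, b):
    # C(a,b) = E[|p - 1/2| | a,b] under the Beta(a,b) posterior
    f = lambda p: abs(p - 0.5) * p**(a - 1) * (1 - p)**(b - 1) / beta_fn(a, b)
    return integrate.quad(f, 0.0, 1.0)[0]

def stop_risk(a, b):
    # rho0(a,b) = A{C(a,b) - |E(p|a,b) - 1/2|}, with E(p|a,b) = a/(a+b)
    return A * (C(a, b) - abs(a / (a + b) - 0.5))

@lru_cache(maxsize=None)
def rho(a, b):
    # Bayes risk: min(stop now, pay C(a,b) and take one more observation)
    if a + b >= N:
        return stop_risk(a, b)
    go = (a * rho(a + 1, b) + b * rho(a, b + 1)) / (a + b) + C(a, b)
    return min(stop_risk(a, b), go)

# crude boundary read-off: posterior states where stopping is optimal
for n in (10, 20, 30):
    stops = [a for a in range(1, n) if rho(a, n - a) == stop_risk(a, n - a)]
    print(n, stops)
```

Such a table is exactly what the bounds of this chapter let one avoid computing: each λ̄ or ρ evaluation needs only the normal integral, while the recursion requires the full triangular array of states up to the truncation point.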
Lower bound: As in the binomial problem (see (5.25)), we consider the corresponding family of solutions of (5.36) and (5.37), where the parameter a may now depend, as in case (i), on r. Eliminating a(r) from the two equations resulting from the boundary conditions (5.32) and (5.33),

    F_a(r, ρ(r,t), t) = K(r, ρ(r,t), t),    F_a'(r, ρ(r,t), t) = K'(r, ρ(r,t), t),

the inner bound ρ(r,t) is given by the same equation as (5.26), with its right member 1/(2t) replaced by the corresponding expression in r and t. The same three facts (5.41), (5.42) and (5.43) also hold for ρ(r,t). An asymptotic expansion of ρ(r,t) for large rt is given by

(5.52)    ρ(r,t) = √(2/π)(1/(rt))[1 + O((1/(rt))²)].

Expressing the boundary y1(x,z), obtained for this case in (4.107) of Chapter IV, in the notation of this chapter gives

(5.53)    α(r,t) = √(2/π)(1/(rt))[1 - (7π/48)(1/(rt))² + O((1/(rt))⁴)].

Comparing (5.53) with (5.51) and (5.52) gives an independent check on (4.107).

BIBLIOGRAPHY

[1] Amster, S. J. (1962). "A modified Bayes stopping rule," to appear in Annals of Mathematical Statistics. (Institute of Statistics Mimeo Series No. 318, Chapel Hill.)

[2] Anscombe, F. J. (1963). "Sequential medical trials," Journal of the American Statistical Association, Volume 58.

[3] Armitage, P. (1954). "Sequential tests in prophylactic and therapeutic trials," Quarterly Journal of Medicine, New Series, Volume 23, pp. 255-274.

[4] Armitage, P. (1957). "Restricted sequential procedures," Biometrika, Volume 44, pp. 9-26.

[5] Armitage, P. (1960). Sequential Medical Trials, Thomas, Springfield, Illinois; Blackwell, Oxford.

[6] Arrow, K. J., Blackwell, D., and Girshick, M. A. (1949). "Bayes and minimax solutions of sequential decision problems," Econometrica, Volume 17, pp. 213-244.

[7] Barnard, G. A. (1947). "Significance tests for 2x2 tables," Biometrika, Volume 34, pp. 123-138.

[8] Bather, J. A. (1962). "Bayes procedures for deciding the sign of a normal mean," Proceedings of the Cambridge Philosophical Society, Volume 58, pp. 599-620.

[9] Blackwell, D., and Girshick, M. A. (1954). Theory of Games and Statistical Decisions, John Wiley and Sons, New York.

[10] Boer, J. de (1953). "Sequential tests with three possible decisions for testing an unknown parameter," Applied Scientific Research, Section B, Volume 3, pp. 249-259.

[11] "... tests," Journal of the American Statistical Association, Volume 51, pp. 243-256.

[12] Breakwell, J. and Chernoff, H. (1962). "Sequential tests for the mean of a normal distribution II (large t)," Technical Report No. 83, Applied Mathematics and Statistics Laboratories, Stanford University, Palo Alto.

[13] Chernoff, H. (1956). "Large-sample theory: parametric case," Annals of Mathematical Statistics, Volume 27, pp. 1-22.

[14] Chernoff, H. (1961). "Sequential tests for the mean of a normal distribution," Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp. 79-91.

[15] Cramer, H. (1946). Mathematical Methods of Statistics, Princeton University Press, Princeton.

[16] Darwin, J. H. (1959). "Note on a three-decision test for comparing two binomial proportions," Biometrika, Volume 46, pp. 106-113.

[17] Erdelyi, A., et al. (1953). Higher Transcendental Functions, Volume 1, Bateman Manuscript Project, McGraw-Hill Book Company, Inc., New York.

[18] Fisher, R. A. (1958). Statistical Methods for Research Workers, 13th edition (revised), Hafner Publishing Company, Inc., New York.

[19] Girshick, M. A. (1946). "Contributions to the theory of sequential analysis, I," Annals of Mathematical Statistics, Volume 17, pp. 123-143.

[20] Gordon, R. D. (1941). "Values of Mill's ratio of area to bounding ordinate and of the normal probability integral for large values of the argument," Annals of Mathematical Statistics, Volume 12, pp. 364-366.

[21] Hall, W. J. (1959). "The relationship between sufficiency and invariance with applications in sequential analysis, I," to appear in Annals of Mathematical Statistics. (Institute of Statistics Mimeo Series No. 228, Chapel Hill.)

[22] Hall, W. J. (1961). "Some sequential tests for symmetric problems," invited address at the Eastern Regional Meeting of the Institute of Mathematical Statistics, Ithaca, New York.

[23] Hoeffding, W. (1960). "Lower bounds for the expected sample size and the average risk of a sequential procedure," Annals of Mathematical Statistics, Volume 31, pp. 352-368.

[24] Hoeffding, W. (1961). Lectures on Decision Theory, given at the University of North Carolina, Chapel Hill.

[25] Johnson, N. L. (1961). "Sequential analysis: a survey," Journal of the Royal Statistical Society, Series A (General), Volume 124, pp. 372-411.

[26] Lechner, J. A. (1962). "Optimum decision procedures for a Poisson process parameter," Annals of Mathematical Statistics, Volume 33, pp. 1384-1402.

[27] Lehmann, E. L. (1959). Testing Statistical Hypotheses, John Wiley and Sons, New York.

[28] Lindley, D. V. (1961). "Dynamic programming and decision theory," Applied Statistics, Volume 10, pp. 39-51.

[29] Moriguti, S. (1955). "Notes on sampling inspection plans," Reports of Statistical Application Research, Union of Japanese Scientists and Engineers, Volume 3, No. 4, pp. 99-121.

[30] Moriguti, S. and Robbins, H. (1962). "A Bayes test of 'p ≤ 1/2' versus 'p > 1/2'," Reports of Statistical Application Research, Union of Japanese Scientists and Engineers, Volume 9, No. 2, pp. 1-22.

[31] Novick, M. R. (1963). "A Bayesian approach to the analysis of data from clinical trials," invited address at the Eastern Regional Meeting of the Institute of Mathematical Statistics, Cambridge, Massachusetts.

[32] Pearson, K. (1934). Tables of the Incomplete Beta-Function, Cambridge University Press, England.

[33] Raiffa, H. and Schlaifer, R. (1961). Applied Statistical Decision Theory, Graduate School of Business Administration, Harvard University, Boston.

[34] Schwarz, G. (1962). "Asymptotic shapes of Bayes sequential testing regions," Annals of Mathematical Statistics, Volume 33, pp. 224-236.

[35] Sobel, M. and Wald, A. (1949). "A sequential decision procedure for choosing one of three hypotheses concerning the unknown mean of a normal distribution," Annals of Mathematical Statistics, Volume 20, pp. 502-522.

[36] Sobel, M. (1953). "An essentially complete class of decision functions for certain standard sequential procedures," Annals of Mathematical Statistics, Volume 24, pp. 319-337.

[37] Ura, S. (1955). "A minimax approach to a single sampling inspection by attributes," Reports of Statistical Application Research, Union of Japanese Scientists and Engineers, Volume 3, No. 4, pp. 140-148.

[38] Vogel, W. (1960). "A sequential design for the two-armed bandit," Annals of Mathematical Statistics, Volume 31, pp. 430-443.

[39] Wald, A. (1947). Sequential Analysis, John Wiley and Sons, New York.

[40] Wald, A. (1951). "Asymptotic minimax solutions of sequential point estimation problems," Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 1-11.

[41] Wetherill, G. B. (1961). "Bayesian sequential analysis," Biometrika, Volume 48, pp. 281-292.

[42] Wilks, S. S. (1962). Mathematical Statistics, John Wiley and Sons, New York.