Probability

SAMPLE SPACE AND EVENTS

The set S of all possible outcomes of a given experiment is called the sample space. A particular outcome, i.e., an element in S, is called a sample point. An event A is a set of outcomes or, in other words, a subset of the sample space S. In particular, the set {a} consisting of a single sample point a ∈ S is called an elementary event. Furthermore, the empty set ∅ and S itself are subsets of S, and so ∅ and S are also events; ∅ is sometimes called the impossible event or the null event.

Since an event is a set, we can combine events to form new events using the various set operations:

(i) A ∪ B is the event that occurs iff A occurs or B occurs (or both).
(ii) A ∩ B is the event that occurs iff A occurs and B occurs.
(iii) Ac, the complement of A, also written Ā, is the event that occurs iff A does not occur.

Two events A and B are called mutually exclusive if they are disjoint, that is, if A ∩ B = ∅. In other words, A and B are mutually exclusive iff they cannot occur simultaneously. Three or more events are mutually exclusive if every two of them are mutually exclusive.

EXAMPLE

(a) Experiment: Toss a coin three times and observe the sequence of heads (H) and tails (T) that appears. The sample space consists of the following eight elements:

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let A be the event that two or more heads appear consecutively, and B that all the tosses are the same:

A = {HHH, HHT, THH}  and  B = {HHH, TTT}

Then A ∩ B = {HHH} is the elementary event that only heads appear. The event that five heads appear is the empty set ∅.

(b) Experiment: Toss a (six-sided) die, pictured in Fig. 7-1(a), and observe the number (of dots) that appears on top. The sample space S consists of the six possible numbers, that is, S = {1, 2, 3, 4, 5, 6}. Let A be the event that an even number appears, B that an odd number appears, and C that a prime number appears. That is, let

A = {2, 4, 6},  B = {1, 3, 5},  C = {2, 3, 5}

Then A ∪ C = {2, 3, 4, 5, 6} is the event that an even or a prime number occurs, B ∩ C = {3, 5} is the event that an odd prime number occurs, and Cc = {1, 4, 6} is the event that a prime number does not occur. Note that A and B are mutually exclusive: A ∩ B = ∅. In other words, an even number and an odd number cannot occur simultaneously.

(c) Experiment: Toss a coin until a head appears, and count the number of times the coin is tossed. The sample space S of this experiment is S = {1, 2, 3, . . .}. Since every positive integer is an element of S, the sample space is infinite.

Remark: The sample space S in Example 7.1(c), as noted, is not finite. The theory concerning such sample spaces lies beyond the scope of this text. Thus, unless otherwise stated, all our sample spaces S shall be finite.

Fig. 7-1
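Since events are simply subsets of the sample space S, the set operations (i)–(iii) above can be carried out directly on sets in code. The short Python sketch below is only an illustration (it is not part of the text); it reproduces part (a) of the example above, with variable names chosen to match the events A and B there.

```python
# Illustrative sketch: events as Python sets, using the three-coin-toss example above.
from itertools import product

S = {"".join(t) for t in product("HT", repeat=3)}   # sample space of 8 outcomes

A = {s for s in S if "HH" in s}   # two or more heads appear consecutively
B = {"HHH", "TTT"}                # all three tosses are the same

print(sorted(A & B))      # intersection A ∩ B = ['HHH'], the elementary event "only heads"
print(sorted(A | B))      # union A ∪ B: A occurs or B occurs (or both)
print(sorted(S - A))      # complement of A: A does not occur
print(A.isdisjoint(B))    # False, so A and B are not mutually exclusive
```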
EXAMPLE (Pair of dice)

Toss a pair of dice and record the two numbers on the top. There are six possible numbers, 1, 2, . . . , 6, on each die. Thus S consists of the pairs of numbers from 1 to 6, and hence n(S) = 36. Figure 7-1(b) shows these 36 pairs of numbers arranged in an array where the rows are labeled by the first die and the columns by the second die.

Let A be the event that the sum of the two numbers is 6, and let B be the event that the largest of the two numbers is 4. That is, let

A = {(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)},  B = {(1, 4), (2, 4), (3, 4), (4, 4), (4, 3), (4, 2), (4, 1)}

Then the event "A and B" consists of those pairs of integers whose sum is 6 and whose largest number is 4 or, in other words, the intersection of A and B. Thus

A ∩ B = {(2, 4), (4, 2)}

Similarly, "A or B," the sum is 6 or the largest is 4, shaded in Fig. 7-1(b), is the union A ∪ B.

EXAMPLE (Deck of cards)

A card is drawn from an ordinary deck of 52 cards, which is pictured in Fig. 7-2(a). The sample space S consists of the four suits, clubs (C), diamonds (D), hearts (H), and spades (S), where each suit contains 13 cards, numbered 2 to 10, plus jack (J), queen (Q), king (K), and ace (A). The hearts (H) and diamonds (D) are red cards, and the spades (S) and clubs (C) are black cards. Figure 7-2(b) pictures 52 points which represent the deck S of cards in the obvious way.

Let E be the event of a picture card, or face card, that is, a jack (J), queen (Q), or king (K), and let F be the event of a heart. Then E ∩ F = {JH, QH, KH}, as shaded in Fig. 7-2(b).

Fig. 7-2

FINITE PROBABILITY SPACES

Definition: Let S be a finite sample space, say S = {a1, a2, . . . , an}. A finite probability space, or probability model, is obtained by assigning to each point ai in S a real number pi, called the probability of ai, satisfying the following properties:

(i) Each pi is nonnegative, that is, pi ≥ 0.
(ii) The sum of the pi is 1, that is, p1 + p2 + · · · + pn = 1.

The probability of an event A, written P(A), is then defined to be the sum of the probabilities of the points in A. The singleton set {ai} is called an elementary event and, for notational convenience, we write P(ai) for P({ai}).

EXAMPLE

Suppose three coins are tossed, and the number of heads is recorded. (Compare with Example 7.1(a) above.) The sample space is S = {0, 1, 2, 3}. The following assignments on the elements of S define a probability space:

P(0) = 1/8,  P(1) = 3/8,  P(2) = 3/8,  P(3) = 1/8

That is, each probability is nonnegative, and the sum of the probabilities is 1. Let A = {1, 2, 3} and B = {0, 3}. Then, by definition,

P(A) = P(1) + P(2) + P(3) = 3/8 + 3/8 + 1/8 = 7/8  and  P(B) = P(0) + P(3) = 1/8 + 1/8 = 1/4

Equiprobable Spaces

Frequently the physical characteristics of an experiment suggest that the various outcomes of the sample space be assigned equal probabilities. Such a finite probability space S, where each sample point has the same probability, will be called an equiprobable space. In particular, if S contains n points, then the probability of each point is 1/n. Furthermore, if an event A contains r points, then its probability is r(1/n) = r/n. In other words, where n(A) denotes the number of elements in a set A,

P(A) = (number of elements in A)/(number of elements in S) = n(A)/n(S)

or

P(A) = (number of outcomes favorable to A)/(total number of possible outcomes)
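As a quick illustration (not part of the text), a finite probability space can be modeled in Python as a mapping from sample points to probabilities; P(A) is then just the sum over the points of A. The sketch below uses the head-count example above; the names P_point and P are my own.

```python
# Illustrative sketch of a finite probability space, using the head-count example above.
from fractions import Fraction as F

P_point = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}   # number of heads in 3 tosses

# Check properties (i) and (ii) of the definition.
assert all(p >= 0 for p in P_point.values())
assert sum(P_point.values()) == 1

def P(event):
    """P(A): the sum of the probabilities of the points in the event A."""
    return sum(P_point[a] for a in event)

A = {1, 2, 3}
B = {0, 3}
print(P(A), P(B))                          # 7/8 and 1/4
print(P(P_point.keys() - A) == 1 - P(A))   # True: P of the complement of A equals 1 - P(A)
```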
Theorems on Finite Probability Spaces

The following theorem follows directly from the fact that the probability of an event is the sum of the probabilities of its points.

Theorem: The probability function P defined on the class of all events in a finite probability space has the following properties:

[P1] For every event A, 0 ≤ P(A) ≤ 1.
[P2] P(S) = 1.
[P3] If events A and B are mutually exclusive, then P(A ∪ B) = P(A) + P(B).

Theorem: Let A be any event. Then P(Ac) = 1 − P(A).

Theorem: Consider the empty set ∅ and any events A and B. Then:

(i) P(∅) = 0.
(ii) P(A \ B) = P(A) − P(A ∩ B).
(iii) If A ⊆ B, then P(A) ≤ P(B).

Theorem (Addition Principle): For any events A and B,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

EXAMPLE

Suppose a student is selected at random from 100 students where 30 are taking mathematics, 20 are taking chemistry, and 10 are taking mathematics and chemistry. Find the probability p that the student is taking mathematics or chemistry.

Let M = {students taking mathematics} and C = {students taking chemistry}. Since the space is equiprobable,

P(M) = 30/100 = 3/10,  P(C) = 20/100 = 1/5,  P(M and C) = P(M ∩ C) = 10/100 = 1/10

Thus, by the Addition Principle,

p = P(M or C) = P(M ∪ C) = P(M) + P(C) − P(M ∩ C) = 3/10 + 1/5 − 1/10 = 2/5 = 0.4

CONDITIONAL PROBABILITY

Suppose E is an event in a sample space S with P(E) > 0. The conditional probability of A given E, written P(A|E), is defined as follows:

P(A|E) = P(A ∩ E)/P(E)

As pictured in the Venn diagram in Fig. 7-3, P(A|E) measures, in a certain sense, the relative probability of A with respect to the reduced space E.

Fig. 7-3

Suppose S is an equiprobable space, and n(A) denotes the number of elements in A. Then:

P(A ∩ E) = n(A ∩ E)/n(S),  P(E) = n(E)/n(S),  and so  P(A|E) = P(A ∩ E)/P(E) = n(A ∩ E)/n(E)

We state this result formally.

Theorem: Suppose S is an equiprobable space and A and E are events. Then

P(A|E) = (number of elements in A ∩ E)/(number of elements in E) = n(A ∩ E)/n(E)

EXAMPLE

(a) A pair of fair dice is tossed. The sample space S consists of the 36 ordered pairs (a, b), where a and b can be any of the integers from 1 to 6; thus n(S) = 36. Find P(A|E) where:

E = {sum is 6}  and  A = {2 appears on at least one die}

Now E consists of five elements and A ∩ E consists of two elements; namely,

E = {(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)}  and  A ∩ E = {(2, 4), (4, 2)}

By Theorem 7.5, P(A|E) = 2/5. On the other hand, A itself consists of 11 elements, that is,

A = {(2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6), (1, 2), (3, 2), (4, 2), (5, 2), (6, 2)}

Since S consists of 36 elements, P(A) = 11/36.

(b) A couple has two children; the sample space is S = {bb, bg, gb, gg} with probability 1/4 for each point. Find the probability p that both children are boys if it is known that: (i) at least one of the children is a boy; (ii) the older child is a boy.

(i) Here the reduced space consists of three elements, {bb, bg, gb}; hence p = 1/3.
(ii) Here the reduced space consists of only two elements, {bb, bg}; hence p = 1/2.

Multiplication Theorem for Conditional Probability

Suppose A and B are events in a sample space S with P(A) > 0. By definition of conditional probability, P(B|A) = P(B ∩ A)/P(A). Multiplying both sides by P(A) gives the following result:

Theorem (Multiplication Theorem for Conditional Probability): P(A ∩ B) = P(A)P(B|A)

The multiplication theorem gives us a formula for the probability that events A and B both occur. It can easily be extended to three or more events A1, A2, . . . , Am; that is,

P(A1 ∩ A2 ∩ · · · ∩ Am) = P(A1) · P(A2|A1) · · · P(Am|A1 ∩ A2 ∩ · · · ∩ Am−1)
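The counting formula P(A|E) = n(A ∩ E)/n(E) and the multiplication theorem can both be checked on the pair-of-dice example in part (a) above. The Python sketch below is only illustrative (not part of the text); the helper names cond and P are my own.

```python
# Illustrative sketch: conditional probability on the equiprobable pair-of-dice space.
from fractions import Fraction as F
from itertools import product

S = set(product(range(1, 7), repeat=2))   # 36 ordered pairs (a, b)
E = {s for s in S if sum(s) == 6}         # E = {sum is 6}
A = {s for s in S if 2 in s}              # A = {2 appears on at least one die}

def cond(A, E):
    """P(A|E) on an equiprobable space: n(A ∩ E) / n(E)."""
    return F(len(A & E), len(E))

def P(X):
    """P(X) on an equiprobable space: n(X) / n(S)."""
    return F(len(X), len(S))

print(cond(A, E))                     # 2/5
print(P(A))                           # 11/36
print(P(A & E) == P(E) * cond(A, E))  # True: multiplication theorem P(A ∩ E) = P(E)P(A|E)
```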
INDEPENDENT EVENTS

Events A and B in a probability space S are said to be independent if the occurrence of one of them does not influence the occurrence of the other. More specifically, B is independent of A if P(B) is the same as P(B|A). Now substituting P(B) for P(B|A) in the Multiplication Theorem P(A ∩ B) = P(A)P(B|A) yields P(A ∩ B) = P(A)P(B).

Definition: Events A and B are independent if P(A ∩ B) = P(A)P(B); otherwise they are dependent.

We emphasize that independence is a symmetric relation. In particular, the equation

P(A ∩ B) = P(A)P(B)  implies both  P(B|A) = P(B)  and  P(A|B) = P(A)

EXAMPLE

A fair coin is tossed three times, yielding the equiprobable space

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Consider the events:

A = {first toss is heads} = {HHH, HHT, HTH, HTT}
B = {second toss is heads} = {HHH, HHT, THH, THT}
C = {exactly two heads in a row} = {HHT, THH}

Clearly A and B are independent events; this fact is verified below. On the other hand, the relationship between A and C and between B and C is not obvious. We claim that A and C are independent, but that B and C are dependent. We have:

P(A) = 4/8 = 1/2,  P(B) = 4/8 = 1/2,  P(C) = 2/8 = 1/4

Also,

P(A ∩ B) = P({HHH, HHT}) = 1/4,  P(A ∩ C) = P({HHT}) = 1/8,  P(B ∩ C) = P({HHT, THH}) = 1/4

Accordingly,

P(A)P(B) = 1/2 · 1/2 = 1/4 = P(A ∩ B), and so A and B are independent;
P(A)P(C) = 1/2 · 1/4 = 1/8 = P(A ∩ C), and so A and C are independent;
P(B)P(C) = 1/2 · 1/4 = 1/8 ≠ P(B ∩ C), and so B and C are dependent.

EXAMPLE

Suppose A and B each fire once at a target, and let A and B also denote the events that A and B, respectively, hit the target. We are given that P(A) = 1/4 and P(B) = 2/5, and we seek P(A ∪ B), the probability that the target is hit. Furthermore, the probability that A or B hits the target is not influenced by what the other does; that is, the event that A hits the target is independent of the event that B hits the target, that is, P(A ∩ B) = P(A)P(B). Thus

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B) − P(A)P(B) = 1/4 + 2/5 − (1/4)(2/5) = 11/20

INDEPENDENT REPEATED TRIALS, BINOMIAL DISTRIBUTION

Definition: Let S be a finite probability space. By the space of n independent repeated trials, we mean the probability space Sⁿ consisting of ordered n-tuples of elements of S, with the probability of an n-tuple defined to be the product of the probabilities of its components:

P((s1, s2, . . . , sn)) = P(s1)P(s2) · · · P(sn)

EXAMPLE

Whenever three horses a, b, and c race together, their respective probabilities of winning are 1/2, 1/3, and 1/6. That is, S = {a, b, c} with P(a) = 1/2, P(b) = 1/3, and P(c) = 1/6. If the horses race twice, then the sample space of the two repeated trials is

S² = {aa, ab, ac, ba, bb, bc, ca, cb, cc}

For notational convenience, we have written ac for the ordered pair (a, c). The probability of each point in S² is

P(aa) = P(a)P(a) = (1/2)(1/2) = 1/4,  P(ba) = 1/6,  P(ca) = 1/12
P(ab) = P(a)P(b) = (1/2)(1/3) = 1/6,  P(bb) = 1/9,  P(cb) = 1/18
P(ac) = P(a)P(c) = (1/2)(1/6) = 1/12,  P(bc) = 1/18,  P(cc) = 1/36

Thus the probability of c winning the first race and a winning the second race is P(ca) = 1/12.

Repeated Trials with Two Outcomes, Bernoulli Trials, Binomial Experiment

Now consider an experiment with only two outcomes. Independent repeated trials of such an experiment are called Bernoulli trials, named after the Swiss mathematician Jacob Bernoulli (1654–1705). The term independent trials means that the outcome of any trial does not depend on the previous outcomes (such as tossing a coin). We will call one of the outcomes success and the other outcome failure. Let p denote the probability of success in a Bernoulli trial, so that q = 1 − p is the probability of failure.

A binomial experiment consists of a fixed number of Bernoulli trials. A binomial experiment with n trials and probability p of success will be denoted by B(n, p). The probability of exactly k successes in a binomial experiment B(n, p) is

P(k) = C(n, k) p^k q^(n−k) = [n!/(k!(n − k)!)] p^k q^(n−k)
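As an illustration (not part of the text), the binomial probability P(k) can be computed directly from this formula. The sketch below assumes, as an example, p = 1/2 (success = heads on a fair coin) and checks that B(3, 1/2) reproduces the head-count probabilities 1/8, 3/8, 3/8, 1/8 used earlier.

```python
# Illustrative sketch: P(k successes) in a binomial experiment B(n, p) is C(n, k) p^k q^(n-k).
from fractions import Fraction as F
from math import comb

def binomial_pmf(n, k, p):
    """Probability of exactly k successes in n Bernoulli trials with success probability p."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

p = F(1, 2)   # assumed example: success = heads on a fair coin
print([binomial_pmf(3, k, p) for k in range(4)])           # values 1/8, 3/8, 3/8, 1/8
assert sum(binomial_pmf(3, k, p) for k in range(4)) == 1   # the probabilities sum to 1
```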