AMS 5 CHANCE VARIABILITY

The Law of Averages

When tossing a fair coin the chances of heads and tails are the same: 50% and 50%. So if the coin is tossed a large number of times, the number of heads and the number of tails should be approximately equal. This is the law of averages. The number of heads will be off from half the number of tosses by some amount; that amount is called the chance error. The chance error increases with the number of tosses in absolute terms, but decreases in relative terms.

Q: A coin is tossed and you win a dollar if there are more than 60% heads. Which is better: 10 tosses or 100?
A: 10 tosses is better. As the number of tosses increases, the percentage of heads is more likely to be close to 50% by the law of averages, so getting more than 60% heads becomes less likely.

Q: Same as before, but you win one dollar if there are exactly 50% heads.
A: 10 tosses is better, since in absolute terms the number of heads is more likely to be off the expected value when the number of tosses is large.

Q: 100 tickets are drawn at random with replacement from one of two boxes: one contains two tickets marked -1 and two marked 1, the other contains one ticket marked -1 and one marked 1. The amounts on the tickets drawn are paid to you. Which box do you prefer?
A: The expected payoff is the same for both boxes, since both contain 50% tickets marked 1 and 50% marked -1.

Random Variables

Random variables are the numerical results of chance processes: variables whose values depend on the outcome of a random experiment.
Example 1: Toss a coin 1000 times and report the number of heads. If you repeat the experiment the number of heads will turn out differently.
Example 2: The amount of money won or lost at roulette.
Example 3: The percentage of Democrats in a random sample of voters.

Box Model

• Find an analogy between the process being studied (e.g. sampling voters) and drawing numbers at random from a box.
• Connect the variability you want to know about (e.g.
estimate for the Democratic vote) with the chance variability in the sum of the numbers drawn from the box.

The analogy between a chance process and drawing from a box is called a box model. Consider a box model for roulette. A roulette wheel has 38 pockets: 1 through 36 are alternately colored red and black, plus 0 and 00, which are colored green. So there are 18 red pockets and 18 black ones. Suppose you win $1 if red comes up and lose $1 if either a black number or 0 or 00 comes up. Your chance of winning is 18 in 38 and your chance of losing is 20 in 38. A box representation has 18 tickets marked +$1 and 20 tickets marked -$1.

Suppose now that you bet a dollar on a single number and that, if you win, you get your $1 plus $35, but you lose your $1 if any other number comes up. A box model of this bet has one ticket marked +$35 and 37 tickets marked -$1. What is your net gain after 100 plays? This corresponds to the sum of 100 draws made at random with replacement from the box. To calculate this amount we need the concept of expected value.

Expected Value and Standard Error

A chance process is running. It delivers a number. Then we rerun it and it delivers another number, and so on. The numbers delivered by the process vary around the expected value, the amounts off being similar in size to the standard error.

Example: Count the number of heads in 100 tosses of a coin. The expected value of the number of heads is 50. You actually toss the coin and the results are:
• 57 heads, you are off by 7
• 46 heads, you are off by -4
• 47 heads, you are off by -3
The amounts off are similar in size to the standard error. The expected value and the standard error depend on the random process that generates the numbers.

Consider a box containing three tickets marked 1 and one marked 5, and draw a ticket at random with replacement 100 times. What is the expected value of the sum of the tickets? The chance of a 1 is 75% and the chance of a 5 is 25%, so in 100 draws we expect about 25 fives and 75 ones, giving 25 x 5 + 75 x 1 = 200.
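This expected value of 200 can be checked by simulation. A minimal sketch in Python, assuming a box with three tickets marked 1 and one marked 5 (matching the 75%/25% chances above):

```python
import random

# Box with a 75% chance of drawing a 1 and a 25% chance of drawing a 5
box = [1, 1, 1, 5]

def sum_of_draws(box, draws, rng):
    """Sum of `draws` tickets drawn at random with replacement from the box."""
    return sum(rng.choice(box) for _ in range(draws))

rng = random.Random(0)
sums = [sum_of_draws(box, 100, rng) for _ in range(10_000)]
average_sum = sum(sums) / len(sums)
print(round(average_sum, 1))  # close to the expected value of 200
```

Each repetition of the 100 draws gives a different sum, but the long-run average of those sums settles near 200.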
Notice that this number is equal to 100 x 2 = 200, which is the number of draws times the average of the numbers in the box. As a general rule:

expected value of the sum = (number of draws) x (average of box)

Q: Suppose you play Keno, a game where you pay $1 to play and win $2, with 1 chance in 4 of winning. How much should you expect to win if you play 100 times?
A: A box representation of the game has one ticket marked +$2 and three marked -$1, so the average of the box is ($2 - 3 x $1)/4 = -$0.25, and in 100 plays you should expect to `win' 100 x (-$0.25) = -$25. Of course, if you keep playing you should expect to lose more money!

Q: Consider a box with tickets marked 0, 2, 3, 4 and 6, and suppose 25 draws with replacement are made from the box. What is the expected sum?
A: Each number should appear about 1/5 of the time, that is, 5 times on average. So the expected value of the sum is 5 x 0 + 5 x 2 + 5 x 3 + 5 x 4 + 5 x 6 = 75 = 25 x 3.

In the last example we will not see each ticket appearing exactly 5 times. The actual sum we observe will be off by a chance error:

observed value = expected value + chance error

The standard error gives a measure of how large the chance error is likely to be. When drawing at random with replacement from a box, the standard error for the sum of the draws is given by the square root law:

SE of the sum = (square root of the number of draws) x (SD of box)

where SD of box stands for the standard deviation of the list of numbers in the box. Notice that the SE increases as the number of draws increases, but only like the square root of the number of draws.

Can we make the previous statements more precise? Consider again the box with the five numbers 0, 2, 3, 4 and 6. The average of the box is 3 and its SD is 2, so in 25 draws the expected value is 25 x 3 = 75 and the SE is (square root of 25) x 2 = 10. Also, the sum of 25 draws can range from 0 to 150 (from all 0's to all 6's). What are the chances that the sum will be between 50 and 100?
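Before answering, the expected value of 75 and the SE of 10 quoted above can be computed directly from the two rules. A short Python sketch for the box 0, 2, 3, 4, 6:

```python
from math import sqrt

def box_average(box):
    """Average of the list of numbers in the box."""
    return sum(box) / len(box)

def box_sd(box):
    """Root-mean-square deviation of the tickets from their average."""
    avg = box_average(box)
    return sqrt(sum((t - avg) ** 2 for t in box) / len(box))

def expected_sum(box, draws):
    """Expected value of the sum = number of draws x average of box."""
    return draws * box_average(box)

def se_of_sum(box, draws):
    """Square root law: SE of the sum = sqrt(number of draws) x SD of box."""
    return sqrt(draws) * box_sd(box)

box = [0, 2, 3, 4, 6]
print(expected_sum(box, 25))  # 75.0
print(se_of_sum(box, 25))     # 10.0
```

The SD of the box works out to 2, so 25 draws give an SE of 5 x 2 = 10, matching the figures used in the example.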
To answer this question we observe that:

50 - 75 = -25 = -2.5 x 10 and 100 - 75 = 25 = 2.5 x 10

so 50 and 100 are both 2.5 SEs away from the expected value; an amount off of 25 corresponds to 2.5 standard units.

We can apply the following approximation, which will be better justified by the Central Limit Theorem:
• 68% of the draws will be within one standard unit of the expected value.
• 95% of the draws will be within two standard units of the expected value.
• 99% of the draws will be within 2.5 standard units of the expected value.
For our example the ranges we get for one, two and 2.5 standard units are 75 ± 10, 75 ± 20 and 75 ± 25.

Example: Suppose there are 10,000 independent plays on a roulette wheel in a casino, each play a $1 bet on red. What are the chances that the casino will win more than $250 from these plays? The box model for the casino's net gain has 20 tickets marked +$1 and 18 tickets marked -$1. The average of the box is ($20 - $18)/38 ≈ $0.05, so the expected value of the casino's net gain is the average times the number of plays: 10000 x $0.05 = $500. So the casino expects to win about $500 from these plays.

We now need to calculate the SD of the box. We have 20 deviations of 0.95 and 18 deviations of -1.05 from the average, so

SD of box = square root of (20 x 0.95^2 + 18 x 1.05^2)/38 ≈ 1

where, being conservative, we approximate this number by 1. So the SE is (square root of 10000) x 1 = 100. Since $250 is 2.5 SEs below the expected gain of $500, there is a 99% chance that the net gain for the casino will be between $250 and $750; in particular, the casino is almost certain to win more than $250.

Probability Histograms

Consider a box with four tickets marked 1, one marked 3 and two marked 4. The chances of obtaining a ticket marked 1 are 4/7, the chances of a 3 are 1/7 and the chances of a 4 are 2/7. We can display that information graphically in a probability histogram. Each box is centered at a number and its area corresponds to the probability of that number, so the sum of the areas of the boxes is equal to one. These histograms represent chance, not the frequency of observed data.
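The distinction between chance and observed frequency can be made concrete in code. A sketch for the box above (four 1's, one 3, two 4's): the probability histogram is computed from the box itself, while an empirical histogram is tallied from many draws.

```python
import random
from collections import Counter

# Box from the probability-histogram example: four 1's, one 3, two 4's
box = [1, 1, 1, 1, 3, 4, 4]

# Probability histogram: the chance of each distinct ticket value
chances = {v: box.count(v) / len(box) for v in set(box)}
for value in sorted(chances):
    print(value, round(chances[value], 3))  # 1 0.571 / 3 0.143 / 4 0.286

# Empirical histogram: frequencies observed in a large number of draws
rng = random.Random(0)
counts = Counter(rng.choice(box) for _ in range(100_000))
frequencies = {v: counts[v] / 100_000 for v in sorted(counts)}
print(frequencies)  # close to the chances above
```

With 100,000 draws the observed frequencies land within a fraction of a percent of the chances 4/7, 1/7 and 2/7.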
Empirical histograms, based on the frequencies of observed outcomes of an experiment, converge to the corresponding probability histograms as the number of repetitions grows, as can be seen in the example of rolling two dice and taking their sum.

Consider now taking the product of the two dice. The convergence of the empirical histogram still holds, although the probability histogram is much more irregular than the one obtained for the sum. The regularity is a general feature of sums.

Consider the problem of tossing a fair coin a certain number of times n. We can obtain the probability histogram for each n, and we observe that the probability histogram of the number of heads converges to a very regular curve (the NORMAL) as the number of tosses is increased.

Normal Approximation

We can approximate the probability histogram of the number of heads in a large number of coin tosses using the normal curve.

Example: A coin is tossed 100 times. What is the probability of getting:
a) exactly 50 heads?
b) between 45 and 55 heads inclusive?
c) between 45 and 55 heads exclusive?

Solution: a) We can look at the probability histogram for this case. The chances corresponding to 50 heads equal the area of the box with base from 49.5 to 50.5. The area of this box is 7.96%.

What about an approximation using the normal curve? The first step is to calculate the expected value and the standard error. Consider a box model with a ticket marked 0 for tails and a ticket marked 1 for heads; we draw from this box 100 times. The expected number of heads is 100 x 1/2 = 50, and since the SD of the box is 1/2, the square root law gives a standard error of (square root of 100) x 1/2 = 5. Now we convert to standard units:

(49.5 - 50)/5 = -0.1 and (50.5 - 50)/5 = 0.1

So the normal approximation is the area under the normal curve over the interval (-0.1, 0.1). According to the normal table, this is equal to 7.97%.
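The agreement between the exact chance and the normal approximation can be checked numerically. A sketch using only the standard library: the exact value comes from the binomial formula, and the normal areas come from the standard normal CDF written in terms of the error function.

```python
from math import comb, sqrt, erf

def normal_cdf(z):
    """Area under the standard normal curve to the left of z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n = 100
ev = n * 0.5        # expected number of heads: 50
se = sqrt(n) * 0.5  # square root law with SD of box {0, 1} equal to 1/2: 5

# Exact chance of exactly 50 heads in 100 tosses
exact = comb(100, 50) / 2 ** 100

# Normal approximation: area from 49.5 to 50.5, i.e. (-0.1, 0.1) in standard units
approx = normal_cdf((50.5 - ev) / se) - normal_cdf((49.5 - ev) / se)

print(round(exact, 4), round(approx, 4))  # 0.0796 0.0797
```

The two values agree to three decimal places, which is why the table lookup of 7.97% matches the histogram area of 7.96% so closely.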
b) The probability of getting between 45 and 55 heads inclusive equals the area of the rectangles between 45 and 55 in the probability histogram. This is approximated by the area under the normal curve over the interval (44.5, 55.5). In standard units this corresponds to the interval (-1.1, 1.1), which has a probability of 72.87% according to the table.

c) This time the probability is given by the areas of the rectangles between 46 and 54, which is approximately the area under the curve over the interval (45.5, 54.5). This is the interval (-0.9, 0.9) in standard units, which has a probability of 63.19%.

Very often it is not specified whether the end points are included or not. In that case we use the given interval directly. For the previous example, we would use (45, 55), which converts to (-1, 1) in standard units and yields a probability of 68.27%.

When can we use the normal approximation?

Consider a box whose tickets have a probability histogram that is far from normal. Nevertheless, if we repeatedly draw tickets from the box and sum the results, the probability histogram of the sum will be approximated by the normal curve as the number of draws grows. What if we consider the product of the tickets instead? In that case the probability histogram will not be approximated by a normal curve, no matter how many draws from the box we take.

The Central Limit Theorem

In general, the probability histogram of the sum of a large number of draws from a box of tickets is approximated by the normal curve. This is a mathematical fact that can be expressed and proved as a theorem: the Central Limit Theorem. The reason the CLT is so useful for approximating distributions of data is that the uncertainty in data can often be thought of as the sum of several sources of randomness.
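The CLT can be seen at work in a simulation. A sketch using a hypothetical lopsided box (four 0's and one 1, so its own histogram is far from normal): if the normal approximation holds for the sum, roughly 68% of the simulated sums should fall within one SE of the expected value.

```python
import random
from math import sqrt

# A lopsided box whose probability histogram is far from normal
box = [0, 0, 0, 0, 1]

def sum_of_draws(draws, rng):
    """Sum of `draws` tickets drawn at random with replacement."""
    return sum(rng.choice(box) for _ in range(draws))

avg = sum(box) / len(box)                               # average of box: 0.2
sd = sqrt(sum((t - avg) ** 2 for t in box) / len(box))  # SD of box: 0.4

draws = 400
ev = draws * avg       # expected value of the sum: 80
se = sqrt(draws) * sd  # square root law: 8

rng = random.Random(2)
sums = [sum_of_draws(draws, rng) for _ in range(5_000)]

# By the CLT, roughly 68% of the sums should be within one SE of the EV
# (a bit more here, since the inclusive endpoints of a discrete sum add area)
within_one_se = sum(ev - se <= s <= ev + se for s in sums) / len(sums)
print(round(within_one_se, 2))
```

The simulated proportion lands near 0.7, close to the 68% predicted by the normal curve, even though individual draws from this box look nothing like a normal distribution.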