9. Confidence Intervals and Z-Scores
We’re accumulating a lot of terminology to refer to locations or areas of distributions: standard error,
standard deviation, percentile, quartile, etc. This lab will help clarify how all these relate to one another.
We will also look at the difference between a normal population distribution and a sampling t-distribution.
Much of what is covered here progresses straight into t-tests, which we will cover in the next lab.
9.1 The t-distribution (a.k.a. Student's t-distribution)

While the normal distribution typically describes our raw data, the t-distribution is the distribution of
our sample statistics (e.g. sample means). The t-distribution is very similar in shape to the standard
normal distribution; however, the tails of the t-distribution are generally fatter and the variance is
greater than 1.

Note that there is a different t-distribution for each sample size; in other words, it is a class of
distributions. So when we speak of a specific t-distribution, we have to specify the degrees of
freedom, where df = n - 1. As our sample size increases (and degrees of freedom increase), the
t-distribution approaches the shape of the normal distribution.
[Figure: The normal distribution vs. the t-distribution]
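If you would like to draw this comparison yourself, here is a minimal sketch of one way to overlay the curves in R (the degrees of freedom shown are arbitrary choices, not values from the lab):
curve(dnorm(x), from=-4, to=4, ylab="density") #standard normal distribution
curve(dt(x, df=3), add=TRUE, lty=2)            #t-distribution with 3 degrees of freedom
curve(dt(x, df=30), add=TRUE, lty=3)           #t-distribution with 30 df, already close to normal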
9.2 Converting between raw data, t-values, and p-values
Percentile - the value below which a given percentage of observations within a group fall
Quartile - (1st, 2nd, 3rd, 4th) the points that divide the data set into 4 equal groups, each group comprising a
quarter of the data
Alpha level - a predetermined probability threshold at which we make some sort of decision
P-value - (a percentile) the probability that the observed value, or one more extreme, is due to random chance
Critical t-value - the t-value that corresponds to the α-level
Actual t-value - the t-value that corresponds to the raw data value being tested against the α-level (the
signal-to-noise ratio; see the sketch below)
Signal - the difference between the test and mean values
Noise - a measure of the spread (variability) of the data
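As a preview of how these terms fit together, here is a small sketch with made-up numbers (none of these values come from the lab data):
xbar <- 52; mu <- 50; s <- 4; n <- 16 #hypothetical sample mean, test value, SD, and sample size
signal <- xbar - mu                   #signal: difference between the test and mean values
noise <- s / sqrt(n)                  #noise: standard error, a measure of the spread of the data
signal / noise                        #actual t-value (the signal-to-noise ratio)
qt(0.975, df=n-1)                     #critical t-value for a two-sided alpha of 0.05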
9.3 Confidence Intervals

The following points discuss, and provide examples of, how we can calculate confidence intervals and
convert between the scales above in R.

The functions qnorm() and pnorm() convert between units of standard deviations (= standard errors in
this case) and percentiles (= probabilities) for a normal distribution (in either direction). Remember,
pnorm() expects a value in standard deviations / errors and returns a percentile / probability, and qnorm()
does the opposite.
pnorm(x,mean,sd)

The command above will return the area to the left of x under the normal distribution curve with a
mean of mean and a standard deviation of sd. If you don’t specify mean or sd, R will assume a mean
of zero and an SD of 1. You can also use the qnorm() statement to work things backwards from
probabilities:
qnorm(p,mean,sd)

The command above will return the number x for which the area to the left of x under the normal
distribution curve with a mean of mean and a standard deviation of sd is equal to p. Again, if you
don’t specify mean or sd, R will assume a mean of zero and an SD of 1. With that in mind, we can
calculate some basic confidence intervals with these commands.

We know that one standard error of the mean (SE) for large sample sizes (or one standard deviation
of a normally distributed population) is equivalent to the ~68% confidence interval of the mean,
because ~34% of values fall within 1 SE on each side of the mean (~68% in total). We can confirm this
with the pnorm() command. Remember that pnorm() gives you the total area under the curve to the LEFT
of the number that you specify.
pnorm(1)    #area to the left of +1 SD (~0.84)
pnorm(-1)   #area to the left of -1 SD (~0.16)
qnorm(0.16) #SD value at the 16th percentile (~ -1)
qnorm(0.84) #SD value at the 84th percentile (~ +1)
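Subtracting the two pnorm() results confirms the ~68% figure directly:
pnorm(1) - pnorm(-1) #~0.68, the total area within one SD (or SE) of the mean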

We can also calculate the 90% and 95% confidence intervals. Remember that qnorm() returns a
value in standard deviations. Remember also that the confidence interval is spread around the mean,
which means that we must cut HALF the unwanted area off each side:
qnorm(0.95)  #90% confidence interval, right side
qnorm(0.05)  #90% confidence interval, left side
qnorm(0.975) #95% confidence interval, right side
qnorm(0.025) #95% confidence interval, left side

If the mean and standard deviation of our sampling distribution are not 0 and 1 respectively (which is
almost always the case), we can still use the pnorm() and qnorm() commands to obtain our
confidence intervals simply by entering our mean and standard deviation. In a distribution with a
mean of 10 and a standard deviation of 4, we can see that a value of 6 corresponds to a standard
deviation of -1 (roughly the 16th percentile):
pnorm(6,mean=10,sd=4) #Can be written more simply as pnorm(6,10,4)

Now, can you figure out how to determine, in the distribution we just tested (mean=10, sd=4), what
values lie outside the 90% and 95% confidence intervals? What is the percentile of a score of 18?
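If you get stuck, here is one way the answers can be checked (using the same mean of 10 and SD of 4):
qnorm(c(0.05, 0.95), mean=10, sd=4)   #values beyond which lie outside the 90% confidence interval
qnorm(c(0.025, 0.975), mean=10, sd=4) #values beyond which lie outside the 95% confidence interval
pnorm(18, mean=10, sd=4)              #percentile of a score of 18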

When we have a very large number of samples, our sampling distribution approaches the same
shape as a normal distribution. However, when our sample size is smaller, the areas under the
distribution curve actually change. Because of this, we will normally use a Student's t-distribution to
calculate confidence intervals from samples. The commands are basically identical, except that we use
pt() instead of pnorm() and qt() instead of qnorm(). However, because the area under the
t-distribution is sensitive to sample size, we must also specify our degrees of freedom (n-1). If our
sample size is 10, we would use qt() to determine the 95% confidence intervals and pt() to
determine the percentile for, say, a standard error of 1.5:

qt(0.025,df=9) #Can be written more simply as qt(0.025,9)
qt(0.975,df=9) #Can be written more simply as qt(0.975,9)
pt(1.5,df=9)   #Can be written more simply as pt(1.5,9)
To show that the t-distribution approaches the normal distribution as the sample size increases, we
can compare our values from above with a much larger sample size of 1000, then compare the output
values to those returned from pnorm() and qnorm():
qt(0.025,9)   #sample size of 10
qt(0.025,999) #sample size of 1000
qnorm(0.025)  #entire theoretical population
pt(1.5,9)
pt(1.5,999)
pnorm(1.5)

Now, let’s try to calculate the 95% confidence interval of the mean for data from a previous lab for
lentil variety A (VarA). You will have to import and attach the lentil data from the previous lab.
mean(VarA)
sd(VarA)
sd(VarA)/sqrt(4) #calculate the standard error of VarA
qt(0.975,3)      #calculate the +/- SDs of the 95% CI
Then, remembering the formula for confidence intervals (mean ± critical t-value × standard error):
mean(VarA) + sd(VarA)/sqrt(4) * qt(0.975,3)
mean(VarA) - sd(VarA)/sqrt(4) * qt(0.975,3)
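The same calculation can also be written a little more compactly by saving the pieces first (this assumes, as above, that VarA holds four observations):
se <- sd(VarA)/sqrt(4)             #standard error of VarA
tcrit <- qt(0.975, df=3)           #critical t-value for a 95% CI with 3 df
mean(VarA) + c(-1, 1) * tcrit * se #lower and upper 95% confidence limits in one step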

I hear you saying “Wow, that seems like a lot of work. I mean, six lines of code? You have to be
kidding me.” Well, thankfully there is a shortcut in R. The t-test function in R (which we will work with
more in the next lab) also returns confidence intervals for a sample. You can try it out now:
t.test(VarA, conf.level=0.95) #returns the 95% confidence interval
t.test(VarA, conf.level=0.90) #returns the 90% confidence interval
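If you only want the interval itself, the object returned by t.test() can be indexed directly:
t.test(VarA, conf.level=0.95)$conf.int #just the lower and upper 95% confidence limits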
9.4 Z-Score (a.k.a. Standard Score)

A z-score is a metric of where a given value fits within a distribution (a normal distribution, to be
precise) in the units of standard deviations of the distribution. To accomplish this, we need to create a
z-distribution, which is just our distribution of scores with the mean adjusted to zero and the standard
deviation adjusted to one. But in reality, we don’t need to create an entirely new distribution. We can
just adjust our one score of interest to express it as a number of standard deviations from the mean.

Z-score is calculated as the value of interest, minus the mean of the distribution, then divided by the
standard deviation of the distribution: z = (x – µ) / σ. This expresses z as a deviation from the mean,
relative to the standard deviation (in “units” of standard deviations).
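For a single value this arithmetic is all we need; if you ever want to standardize an entire vector at once, R's scale() function applies the same calculation to every element (using that vector's own mean and SD). A quick sketch, reusing the lentil data imported earlier:
scale(VarA) #z-score of every observation in VarA relative to its own mean and standard deviation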

We can use z-scores to compare one score to a distribution of scores. Say we have a new
unidentified lentil plant in our experiment with a yield of 690. We suspect that it belongs to Variety A,
but we are not sure. We can calculate a z-score for the new plant (since we know the mean and
standard deviation of Variety A, which we are considering a population):
mean(VarA)
sd(VarA)
z = (690 - mean(VarA)) / sd(VarA) #z-score of the new plant relative to Variety A
z

We can then use pnorm() to determine the percentile of the new lentil value (how extreme this value
would be if the plant really did come from the Variety A population). We will have to decide on a
cutoff value that we are comfortable with; this should be done ahead of time. This is very similar to a
one-sample t-test, which we will talk about more in Lab 11.
pnorm(z)
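A sketch of how that decision might look in code; the cutoff of 0.05 is only an assumed example, not a value given in the lab:
alpha <- 0.05             #cutoff chosen ahead of time (assumed value)
p <- pnorm(z)             #percentile of the new plant within Variety A
p < alpha | p > 1 - alpha #TRUE if the plant falls in either extreme tail of Variety A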
CHALLENGE:
1. What is the difference between a Normal distribution and a t-distribution? When are they similar?
2. When is a z-score a better metric to use than a t-score? Why do we usually use t-scores for
statistical testing?
3. The yield of a variety of wheat was measured in a replicated and randomized experiment, and yield
was found to be approximately normally distributed. The 2nd and 98th percentiles were 29 and 41 kg/ha,
respectively. What is the approximate standard deviation?