Inferential Statistics
Statistical Analysis of Research
Data
Statistical Inference
• Getting information about a population
from a sample
• How practical are “statistically significant”
results?
– Cost/benefit
– Crucial difference
– Client acceptability
– Public and political acceptability
– Ethical and legal concerns
Inferential Statistics
• These provide a means for drawing conclusions
about a population given the data actually
obtained from the sample. They allow a
researcher to make generalizations to a larger
number of individuals, based on information
from a limited number of subjects. They are
based on:
– Probability theory
– Statistical inference
– Sampling distributions
Inferential Statistics
– Probability theory – the basis for decision making in
statistical inference. It refers to the relative frequency with
which events or outcomes will happen in a
population in the long run. Likelihood and chance are
similar terms. Examples are usually based on tossing
a coin and finding heads or tails. Probabilities are
statements of likelihood expressed as values from 0 to
1.0.
– p = (number of favorable outcomes) / (total possible outcomes)
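As a quick illustration of this formula (not from the original slides; the examples are the coin and die cases mentioned above), a minimal Python sketch computing probabilities as exact fractions:

from fractions import Fraction

def probability(favorable, possible):
    """p = favorable outcomes / total possible outcomes."""
    return Fraction(favorable, possible)

# Tossing a fair coin: 1 favorable outcome (heads) out of 2 possible.
print(probability(1, 2))          # 1/2
# Rolling a 3 on a fair six-sided die: 1 favorable outcome out of 6.
print(float(probability(1, 6)))   # ~0.167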
Inferential Statistics
– Statistical inference – statistics enable us to judge the
probability that our inferences or estimates are close
to the truth
– Sampling distributions – are theoretical distributions
developed by mathematicians to organize statistical
outcomes from various sample sizes so that we can
determine the probability of something happening by
chance in the population from which the sample was
drawn. They allow us to know the relative frequency
or probability of occurrence of any value in the
distribution.
Inferential Statistics
• Hypothesis testing – 5 basic steps
– Make a prediction
– Decide on a statistical test to use
– Select a significance level and a critical region
(the region of rejection of the null hypothesis). To
do this, you must consider two things:
• Whether both ends (tails) of the distribution should
be included.
• How the critical region of a certain size will
contribute to Type I or Type II errors.
Critical Region in the Sampling Distribution for a One-Tailed Test (figure slide)
Critical Regions in the Sampling Distribution for a Two-Tailed Test (figure slide)
Levels of Significance
• Remember: if a printout shows a two-tailed test
result and you wanted a one-tailed result, divide
the two-tailed p value by 2.
• Example: p = .080 (two-tailed), i.e., p > .05
  p = .040 (one-tailed), i.e., p < .05
• The first would not be statistically significant,
whereas the second would be statistically
significant. (A sketch of this conversion follows.)
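A minimal Python sketch of the conversion (not from the original slides; the sample data are invented for illustration). Note that halving the two-tailed p value is only valid when the observed effect lies in the predicted direction; SciPy can also compute the one-tailed p directly via the alternative argument:

from scipy import stats

# Hypothetical sample, testing whether the mean exceeds 50.
sample = [52, 55, 49, 53, 56, 51, 54, 50]

t, p_two_tailed = stats.ttest_1samp(sample, popmean=50)
# Valid here because the sample mean is above 50 (the predicted direction):
p_one_tailed = p_two_tailed / 2

# Equivalent direct computation of the one-tailed p value:
t2, p_direct = stats.ttest_1samp(sample, popmean=50, alternative='greater')
print(p_two_tailed, p_one_tailed, p_direct)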
Outcomes of Statistical Decision Making
(table slide: rejecting a true null hypothesis is a Type I error; failing to reject a false null hypothesis is a Type II error; the other two outcomes are correct decisions)
Inferential Statistics
• Hypothesis testing cont.
– Computing the test statistic – the test statistic is not a
mean, standard deviation, or any other descriptive
statistic. It is simply a number that can be compared
with the set of results predicted by the sampling
distribution.
– Compare the test statistic to the sampling distribution
(table) and make a decision about the null hypothesis
– reject it if the statistic falls in the region of rejection.
– Consider the power of the test – its probability of
detecting a significant difference. Parametric tests
are more powerful. (A worked sketch of these steps follows.)
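The steps above can be illustrated with a minimal one-sample t-test sketch in Python (the data and the .05 level are invented for illustration): compute the test statistic, look up the critical value from the sampling distribution, and decide about the null hypothesis.

from scipy import stats

sample = [101, 99, 104, 97, 103, 102, 98, 105]   # hypothetical data
mu0 = 100                                        # null-hypothesis mean
alpha = 0.05                                     # significance level

# Compute the test statistic.
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)

# Compare against the sampling distribution (t with N-1 df).
df = len(sample) - 1
t_critical = stats.t.ppf(1 - alpha / 2, df)      # two-tailed critical value

if abs(t_stat) > t_critical:
    print(f"t = {t_stat:.3f} falls in the region of rejection: reject H0")
else:
    print(f"t = {t_stat:.3f} does not reach t_crit = {t_critical:.3f}: retain H0")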
Degrees of Freedom
• One-sample t-test or paired t-test: df = N – 1
• Independent t-test: df = N – 2
• Chi-square test: df = (# rows – 1) x (# columns – 1)
• ANOVA:
– df between groups = # levels (groups) – 1
– df within groups = # subjects – # levels
• Correlation: df = N – 2
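These rules are simple arithmetic; as an illustration (not from the original slides), a few Python helpers:

def df_one_sample_t(n):            # also the paired t-test
    return n - 1

def df_independent_t(n_total):     # N = both samples combined
    return n_total - 2

def df_chi_square(rows, cols):
    return (rows - 1) * (cols - 1)

def df_anova(n_subjects, n_groups):
    """Returns (between-groups df, within-groups df)."""
    return n_groups - 1, n_subjects - n_groups

print(df_independent_t(20))   # 18
print(df_chi_square(2, 3))    # 2
print(df_anova(30, 3))        # (2, 27)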
Levels of Measurement
• There are four levels or scales of
measurement, each classified according to
certain characteristics. Data at the lowest level
are limited to certain statistical tests; the choice
of statistical tests (and the power of those tests)
increases as the levels go up.
Levels of Measurement cont.
• Nominal scale – measurement at its
weakest – numbers or other symbols are
used to classify or partition a class into
mutually exclusive subclasses – animals
can be classified as dogs, cats, etc. You
can test hypotheses regarding distribution
among the categories by using the Chi-square test.
Levels of Measurement cont.
• Ordinal scale – shows relationships
among classes, such as higher than, more
difficult than, etc. It allows the attributes of
a variable to be ranked in relation to each
other. A researcher can test hypotheses
using non-parametric statistics of order
and ranking.
Levels of Measurement cont.
• Interval scale – is similar to the ordinal scale, but
the distance between any two numbers is of a
known size. The numbers used have absolute
values and the interval between each number is
considered to be equal. Increasing amounts of a
variable are represented by increasing numbers
on the scale. The variable is continuous. There
is no true zero – where you have none of the
variable. All parametric tests can be used with
interval data.
Levels of Measurement cont.
• Ratio scale – is like the interval scale but it
has a true zero point as its origin. Time,
length and weight are ratio scales when
used alone, but not as a characteristic of a
person. Arithmetic and geometric means, and all
parametric tests, can be used with ratio data.
Tests of Significance
• Parametric tests of significance – used if there
are at least 30 observations, the population can
be assumed to be normally distributed, variables
are at least in an interval scale
– Z tests are used with samples over 30. There are
four kinds (two samples or two categories).
– t-tests are used when samples are 30 or less
(a sketch of all three follows):
• Single-sample t-test (one sample)
• Independent t-test (two samples)
• Paired t-test (two categories)
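A minimal SciPy sketch of the three t-tests (invented data, purely illustrative):

from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [18, 17, 16, 19, 15, 20, 17, 18]

# Single-sample t-test: does group_a's mean differ from 12?
print(stats.ttest_1samp(group_a, popmean=12))

# Independent t-test: do two separate samples differ?
print(stats.ttest_ind(group_a, group_b))

# Paired t-test: the same subjects measured twice (e.g., pre/post).
print(stats.ttest_rel(group_a, group_b))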
Tests of Significance
• Non-parametric tests of significance – small
numbers, can’t assume a normal distribution, or
measurement not interval
– Chi-square – requires only nominal data – allows the
researcher to determine whether the frequencies
obtained in research differ from those that would
have been expected – uses a χ² sampling
distribution. Both forms are sketched below.
• Chi-square goodness-of-fit test
• Chi-square test of independence
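A minimal SciPy sketch of both chi-square tests (invented counts for illustration):

from scipy import stats

# Goodness of fit: do observed category counts match expected counts?
observed = [25, 30, 45]
expected = [33.3, 33.3, 33.4]
print(stats.chisquare(f_obs=observed, f_exp=expected))

# Test of independence: is the row category related to the column category?
table = [[20, 30],    # rows and columns are hypothetical
         [25, 25]]    # nominal categories (e.g., group x yes/no)
chi2, p, dof, expected_freq = stats.chi2_contingency(table)
print(chi2, p, dof)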
Tests of Significance
– Mann-Whitney U – an alternative to the
independent t-test – must have at least
ordinal data. It compares the ranks
of scores in two samples (from highest to
lowest). The null hypothesis is that the two
samples are randomly distributed. Use U
sampling distribution tables for small sample
sizes (1-8) and medium sample sizes (9-20),
and the Z test for large samples. (A sketch follows.)
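A minimal SciPy sketch (invented ordinal-style scores):

from scipy import stats

sample_1 = [3, 5, 2, 6, 4, 7]
sample_2 = [8, 6, 9, 7, 10, 8]

# SciPy chooses between an exact and a normal-approximation p value
# automatically, mirroring the table-vs-Z-test rule above.
u_stat, p_value = stats.mannwhitneyu(sample_1, sample_2, alternative='two-sided')
print(u_stat, p_value)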
Tests of Significance
– Wilcoxon Matched Pairs (signed-rank test) – an
alternative to the paired t-test. It is used for
repeated measures on the same individuals. It
requires measurement between the ordinal and
interval scales – the scores must have some
real meaning. Use a T table: if the computed T is
less than or equal to the T in the table, the null
hypothesis is rejected. (A sketch follows.)
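A minimal SciPy sketch for repeated measures on the same subjects (invented pre/post scores):

from scipy import stats

pre  = [10, 12, 9, 14, 11, 13, 10, 12]
post = [12, 14, 10, 15, 13, 16, 11, 14]

# SciPy reports the signed-rank statistic (the T referred to above)
# and a p value, so no table lookup is needed.
t_stat, p_value = stats.wilcoxon(pre, post)
print(t_stat, p_value)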
Measures of Association
• Parametric Measures of Association – These
answer the question, “within a given
population, is there a relationship between
one variable and another variable?” A
measure of association can exist only if data
can be logically paired. It can be tested for
significance.
– Correlation – answers the question, “What is the
degree of relationship between x and y?” Use the
Pearson Product Moment Correlation (Pearson r)
– see next slide.
Measures of Association
The Pearson Correlation Coefficient (Pearson r)
• The r examines the relationship between
two quantitative sets of scores.
• The r varies from –1.00 to +1.00
• The r is not a proportion and cannot be
multiplied by 100 to get a percentage.
• To think of the r as a percentage, it needs
to be converted to the “coefficient of
determination,” r². An r of .50 yields an r² of
.25, meaning 25% of the variance is shared,
whereas an r of 0.00 explains none.
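A minimal SciPy sketch computing r and the coefficient of determination (invented paired scores):

from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 7, 5, 8, 6]

r, p_value = stats.pearsonr(x, y)
r_squared = r ** 2   # coefficient of determination
print(f"r = {r:.2f}, r^2 = {r_squared:.2f}, p = {p_value:.3f}")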
Measures of Association
• Non-parametric tests for association
– Correlation
• The Spearman Rank Order Correlation (rs) – “To
what extent and how strongly are two variables
related?” (see the sketch below)
• Phi coefficient – can be used with nominal data,
but ordinal data are preferred
• Kendall’s Q – can be used with nominal data
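A minimal SciPy sketch of the Spearman rank-order correlation (invented rankings from two raters):

from scipy import stats

judge_1 = [1, 2, 3, 4, 5, 6]
judge_2 = [2, 1, 4, 3, 6, 5]

rho, p_value = stats.spearmanr(judge_1, judge_2)
print(rho, p_value)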
Prediction
• Parametric Prediction – using a correlation, if
you know score “x”, you can predict score “y” for
one person – Use regression analysis
– Simple linear regression – allows prediction of
one variable from another – you must have at least
interval-level data (a sketch follows)
– Multiple linear regression – allows prediction
of one variable from several other variables. The
dependent variable must be on the interval scale.
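A minimal SciPy sketch of simple linear regression (invented interval-level data): knowing a person's x score, predict their y score.

from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]                              # predictor scores
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1]          # criterion scores

result = stats.linregress(x, y)
# Predict y for a person whose x score is 9.
y_hat = result.intercept + result.slope * 9
print(f"y = {result.intercept:.2f} + {result.slope:.2f}x, prediction: {y_hat:.2f}")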
Prediction
– Non-parametric prediction – measures the extent to
which you can reduce the error in predicting the
dependent variable as a consequence of having
some knowledge of the independent variable, such as
predicting income (DV) from education (IV)
• Kendall’s Tau – used with ordinal data and ranking – better
than Gamma because it takes ties into account (see the
sketch below)
• Gamma – used with ordinal data to predict the rank of one
variable by knowing the rank on another variable
• Lambda – can be used with nominal data – knowledge of the
IV allows one to make a better prediction of the DV than if
you had no knowledge at all
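A minimal SciPy sketch of Kendall's tau (invented ordinal ranks; SciPy's default tau-b variant handles ties, matching the point above):

from scipy import stats

education = [1, 2, 2, 3, 3, 4, 5, 5]   # ordinal ranks, with ties
income    = [1, 1, 3, 2, 4, 4, 5, 5]

tau, p_value = stats.kendalltau(education, income)
print(tau, p_value)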
Parametric Multiple Comparisons
• The analysis of variance (ANOVA) is probably
the most commonly encountered multiple
comparison test. It compares observed
values with expected values in trying to
discover whether the means of several
populations are equal. It compares two
estimates of the population variance. One
estimate is based on variance within each
sample – within groups. The other is based
on variation across samples – between
groups. The between-groups variance is the
explained variance (due to the treatment) and
the within-groups variance is the
unexplained variance (the error variance).
Parametric Multiple Comparisons
– ANOVA cont. The ratio of the explained
(between-groups) variance to the unexplained
(within-groups) variance gives the F
statistic. If the variance between the groups is
larger, giving an F ratio greater than 1, it may
be significant depending upon the degrees of
freedom. If the F ratio is approximately 1, the
null hypothesis is supported and there was no
significant difference between the groups. (A
sketch follows.)
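A minimal SciPy sketch of a one-way ANOVA across three invented groups:

from scipy import stats

group_1 = [23, 25, 21, 24, 26]
group_2 = [30, 28, 32, 29, 31]
group_3 = [22, 24, 23, 25, 21]

# F = between-groups variance / within-groups variance
f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)
print(f_stat, p_value)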
Parametric Multiple Comparisons
– ANOVA cont. If the null hypothesis is rejected,
then one would be interested in determining
which groups showed a significant difference.
The best way to check this is to conduct a
post hoc test such as the Tukey, Bonferroni,
or Scheffé. (SPSS will do this for you if you
click on Post Hoc and check the test
desired. Check Descriptives while you are still
in ANOVA, and the program will also give you
the mean for each group.) A sketch of the
Tukey test follows.
Parametric Multiple Comparisons
• Two-Way Analysis of Variance
– Classifies participants in two ways
– Results answer three questions:
• Two main effects (one for each factor)
• An interaction effect
Non-parametric Multiple Comparison
• Kruskal-Wallis Test – an alternative to the
one-way ANOVA. The scores are ranked and the
analyses compare the mean rank in each group.
It determines whether there is a difference between
the groups.
• McNemar Test – an adaptation of the Chi-square
that is used with repeated measures at
the nominal level.
• Friedman Test – an alternative to the
repeated-measures ANOVA. Two or more measurements
are taken from the same subjects. It answers the
question of whether the measurement changes over
time. (Sketches of the Kruskal-Wallis and Friedman
tests follow.)
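Minimal SciPy sketches of the Kruskal-Wallis and Friedman tests (invented data; McNemar's test is not in SciPy, though statsmodels provides one):

from scipy import stats

# Kruskal-Wallis: three independent groups, compared by mean rank.
g1 = [7, 9, 6, 8, 7]
g2 = [12, 11, 14, 13, 12]
g3 = [8, 10, 9, 7, 9]
print(stats.kruskal(g1, g2, g3))

# Friedman: repeated measurements on the same five subjects.
time_1 = [5, 6, 7, 5, 6]
time_2 = [7, 8, 8, 6, 7]
time_3 = [9, 9, 10, 8, 9]
print(stats.friedmanchisquare(time_1, time_2, time_3))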