PSYCHOMETRIC PROPERTIES OF QUANTITATIVE MEASURES
Reliability and Validity of Test Measures

Deciding what test to use
- Psychometric Properties
- Ease of Administration
- Training Needed
- Cost
- Appropriateness for your Population

Definitions
- Reliability: consistent, reproducible, dependable
- Validity: measures what it says it measures

Reliability
- Measurement error
- Reliability coefficients
- Types of reliability:
  - Test-retest
  - Rater: intra and inter
  - Internal consistency

Measurement Error
- Observed score = true score + error
- Measurement error = observed score - true score (for example, if the true score is 50 and the observed score is 53, the error is 3)
- A reliability coefficient estimates how much measurement error is present in a score

Sources of Measurement Error
- Systematic error: consistently wrong in the same direction and by the same amount
- Random error: due to chance
- Error can come from the rater, the measuring instrument, or variability in what you are measuring

Reliability Coefficients
- Reliability = true score variance / (true score variance + error variance)
- As error decreases, the coefficient increases
- The coefficient ranges from .00 to 1.00
- < .50 poor; .50 to .75 moderate; > .75 good
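
The ratio above can be illustrated with a short, hypothetical simulation (not part of the original slides); the true-score and error variances below are made up so the expected coefficient lands in the "good" range:

```python
# Sketch only: simulate observed = true + error and compute the reliability ratio.
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(loc=50, scale=10, size=200)  # hypothetical true scores (SD = 10)
error = rng.normal(loc=0, scale=5, size=200)          # hypothetical random error (SD = 5)
observed = true_scores + error                        # observed score = true score + error

reliability = true_scores.var() / (true_scores.var() + error.var())
print(f"reliability coefficient ~ {reliability:.2f}")  # about .80: "good" by the rule of thumb above
```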

Types of Reliability: Test-Retest
- You should get the same results every time you use the test
- Intervals between testings should be long enough to avoid fatigue or remembering the answers, but not so long that natural maturation occurs
- Analyzed with the intraclass correlation coefficient (ICC) or Pearson r
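
As a rough illustration (the scores are made up, not from the slides), Pearson r for two administrations of the same test can be computed with SciPy; an ICC would be computed on the same paired data:

```python
# Sketch only: test-retest correlation for the same people tested on two occasions.
import numpy as np
from scipy.stats import pearsonr

time1 = np.array([12, 15, 18, 22, 25, 27, 30, 33])  # hypothetical scores, occasion 1
time2 = np.array([13, 14, 19, 21, 26, 26, 31, 34])  # hypothetical scores, occasion 2

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
# An ICC is often preferred because, unlike r, it also penalizes a systematic
# shift between occasions (e.g., everyone scoring higher the second time).
```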

Another Way to Test-Retest
- Alternate forms: different versions covering the same content (SAT, GRE)
- The Pearson r correlation coefficient is used (.8 or higher is considered acceptable)

Types of Reliability: Internal Consistency
- Are all questions measuring the same thing?
- Split-half: correlate two halves of the same test (e.g., odd vs. even items), then apply the Spearman-Brown prophecy formula
- Cronbach's alpha: essentially an average of all the possible split-half reliabilities; can be used on multiple-choice items
- When used on dichotomous scores, it is called the Kuder-Richardson 20 (KR-20)
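
Cronbach's alpha is straightforward to compute from a respondents-by-items score matrix. The sketch below uses made-up Likert-type responses and the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
# Sketch only: Cronbach's alpha for a k-item scale (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([   # hypothetical responses to a 4-item scale
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# With 0/1 (dichotomous) items, the same formula reduces to KR-20.
```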

Types of Reliability: Raters
- Intra-rater: stability of one rater across trials
- Inter-rater: consistency between two raters
- Use the ICC for both types
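
One common ICC form is ICC(2,1) (two-way random effects, absolute agreement, single rater). The sketch below computes it from ANOVA mean squares using made-up ratings; statistical packages report the other ICC forms as well:

```python
# Sketch only: ICC(2,1) from a subjects-by-raters matrix of ratings.
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical goniometry readings: 6 subjects measured by 3 raters.
ratings = np.array([
    [110, 112, 109],
    [ 95,  97,  96],
    [130, 128, 131],
    [ 88,  90,  87],
    [120, 119, 122],
    [101, 103, 100],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```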

Validity
- Validity vs. reliability
- Types of validity

Validity vs. Reliability
[Figure: four panels contrasting combinations of reliability and validity: not reliable and not valid; not reliable, valid?; reliable but not valid; reliable and valid]

Generalizability
- External validity: the test is valid if used with the intended population
- The test is valid if used in the appropriate context and as directed for its given purpose

Face Validity
- Appears to test what it is supposed to measure
- The weakest form of validity
- OK for range of motion (ROM), length, and observation of ADLs (activities of daily living)

Content Validity
- Covers the entire range of the variable and reflects the relative importance of each part
- Based on expert opinion; needs to be free of cultural bias
- Example of poor content coverage: a test of function with 20 questions on brushing your teeth but only 1 question each on mobility, bathing, and dressing
- Compare the VAS with the McGill Pain Questionnaire (a single-item intensity rating vs. a multidimensional pain measure)

Criterion-related Validity
- The target test is compared to a gold standard
- Concurrent: the target test is taken at the same time as another test with established validity
- Predictive: examines whether the target test can predict a criterion variable
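
A hypothetical illustration (all scores made up): both forms boil down to correlating the target test with a criterion, the difference being when the criterion is measured:

```python
# Sketch only: concurrent vs. predictive criterion-related validity.
import numpy as np
from scipy.stats import pearsonr

target_test = np.array([22, 35, 41, 28, 50, 33, 45, 38])      # hypothetical target test scores

# Concurrent: gold-standard scores collected at the same time as the target test.
gold_standard = np.array([25, 33, 44, 30, 52, 31, 47, 36])
r_conc, _ = pearsonr(target_test, gold_standard)

# Predictive: a criterion measured later (e.g., function at 6 months).
later_criterion = np.array([30, 40, 48, 33, 55, 37, 50, 41])
r_pred, _ = pearsonr(target_test, later_criterion)

print(f"concurrent validity r = {r_conc:.2f}")
print(f"predictive validity r = {r_pred:.2f}")
```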

Construct Validity
- The ability of a test to measure a construct
- Based on a theoretical framework
- What would you include for a test on “wellness”?

Ways to establish construct validity
- Known groups
- Convergent comparison
- Divergent comparison
- Factor analysis
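
A rough sketch of the first three approaches with made-up data: a "wellness" score should separate groups known to differ, correlate strongly with a related measure (convergent), and correlate weakly with an unrelated one (divergent):

```python
# Sketch only: known-groups and convergent/divergent evidence for construct validity.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

wellness = np.array([70, 82, 65, 90, 75, 88, 60, 85])            # hypothetical wellness scores
related_measure = np.array([68, 80, 70, 92, 73, 85, 58, 83])     # e.g., a quality-of-life scale
unrelated_measure = np.array([12, 30, 25, 18, 27, 15, 22, 29])   # a measure the theory says is unrelated

healthy = np.array([85, 90, 82, 88, 91])   # known group expected to score higher
chronic = np.array([60, 65, 58, 70, 62])   # known group expected to score lower

r_conv, _ = pearsonr(wellness, related_measure)
r_div, _ = pearsonr(wellness, unrelated_measure)
t, p = ttest_ind(healthy, chronic)
print(f"convergent r = {r_conv:.2f}, divergent r = {r_div:.2f}")
print(f"known-groups t = {t:.2f}, p = {p:.3f}")
# Factor analysis (e.g., sklearn.decomposition.FactorAnalysis) can then check whether
# items group into the dimensions the theoretical framework predicts.
```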

Remember
- The reliability and validity of a test measurement are not the same thing as the reliability and validity of a research design.

Where do I find this information?
- lsustudent.pbworks.com/Psy Assessments
- Journals
- Books
- Test manuals