RESEARCH METHODS
Psychology 290
Study Guide
Unit I:
The Scientific Method
Lesson I-1:
Science as a Way of Knowing
Objectives:
a) Describe five ways of knowing.
b) Identify the contribution of Galileo.
Key terms:
Tenacity
Authority
Reason
Common sense
Science
Operational definitions
Replication
Lesson I-2:
The Scientific Approach and Early Approaches
Objectives:
a) Name one major characteristic of science.
b) Understand Newton’s rules of reasoning.
c) Describe the contributions of Galen and Semmelweis to the history of science.
Lesson I-3:
Studying Behavior and Experience
Objectives:
a) Give examples of three levels at which behavior and experience can be studied.
b) Distinguish between behavior and experience.
c) Name four ways of studying psychological processes.
d) Identify the assumption on which the study of experience rests.
Key terms:
Empiricism
Marker variable
Lesson I-4:
Naturalistic Observation and the Correlational Approach
Objectives:
a) Identify three steps of the scientific process.
b) Name three scientific techniques used in psychology.
c) Give examples of naturalistic observation and the correlational approach.
d) Explain the sentence “Correlation does not imply causality”.
Key terms:
Naturalistic observation
Correlational approach
Experimental method
Post-hoc methods
Qualitative methods
Quantitative methods
Lesson I-5:
The Experimental Method
Objectives:
a) Give an example of the experimental method.
b) Describe the components of the experimental method.
c) Explain what is meant by causation.
d) Describe two exploratory ways to use experimental methods.
Key terms:
Hypothesis
Experimental group
Control group
Operational definition
Independent variable
Dependent variable
Treatment effect
Confounding variable
Exploratory research
Lesson I-6:
Logic and Inference
Objectives:
a) Briefly discuss the role of logic in science.
b) Describe two types of validity.
c) Identify two logically valid and invalid arguments.
d) Explain Popper’s ideas about falsification.
Key terms:
Internal validity
External validity
Generalizability
Deduction
Induction
Antecedent
Consequent
Modus ponens
Affirming the consequent
Denying the antecedent
Modus tollens
Falsificationism
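These argument forms can be checked mechanically. The short Python sketch below (an added illustration, not part of the course reader) enumerates the truth table for each form and confirms that modus ponens and modus tollens are valid, while affirming the consequent and denying the antecedent are not.

    from itertools import product

    def implies(p, q):
        # Material implication: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    def is_valid(form):
        # A form is valid if its conclusion is true in every row of the
        # truth table in which all of its premises are true.
        for p, q in product([True, False], repeat=2):
            premises, conclusion = form(p, q)
            if all(premises) and not conclusion:
                return False
        return True

    forms = {
        "modus ponens":             lambda p, q: ([implies(p, q), p], q),
        "modus tollens":            lambda p, q: ([implies(p, q), not q], not p),
        "affirming the consequent": lambda p, q: ([implies(p, q), q], p),
        "denying the antecedent":   lambda p, q: ([implies(p, q), not p], not q),
    }

    for name, form in forms.items():
        print(name, "->", "valid" if is_valid(form) else "invalid")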
PRACTICE TEST I-A
Lesson I-7:
Evaluation of Scientific Research
Objectives:
a) Explain Kuhn’s theory of paradigm shifts.
b) Name three criteria for determining if a finding is worth reporting to others.
c) Identify four ways to ensure the quality of research.
Key terms:
Paradigm
Lesson I-8:
Making Our Hypothesis Concrete and Logical
Objectives:
a) Explain how operational definitions can be used to make our hypotheses
concrete.
b) Describe how induction and deduction can be used to generate hypotheses.
c) Identify the four steps in Platt’s process of strong inference.
Key terms:
Construct validity
Strong inference
Lesson I-9:
Creating Testable Research Hypotheses
Objectives:
a) Distinguish between reliability and validity.
b) Describe four steps required prior to experimentation.
Key terms:
Reliability
Lesson I-10: Ideas, Intuition and Revelation in Science
Objectives:
a) Identify four steps involved in formulating a research hypothesis.
b) Give an example of the role of revelation in scientific research.
c) Name Wallas’ four stages of scientific problem-solving.
Key terms:
Preparation
Incubation
Illumination
Verification
Lesson I-11: Tools for Library Research
Objectives:
a) Name four sources of information and ideas.
b) Identify six major journals in the field of psychology.
c) Identify four major indexes that cover psychological research.
d) Use PsycInfo and visit at least one of the Web sites listed on pp. 79-79.
Lesson I-12: Scales of Measurement
Objectives:
a) Describe four levels or scales of measurement.
Key terms:
Nominal
Ordinal
Interval
Ratio
Frequency distribution
y-axis (ordinate)
x-axis (abscissa)
Bar graph
Frequency polygon
Bimodal distribution
Normal distribution
Skewed distribution
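To make the frequency-distribution terms concrete, the Python sketch below (an added illustration with invented data) counts how often each response or score occurs; the counts belong on the y-axis (ordinate) and the categories or scores on the x-axis (abscissa).

    from collections import Counter

    responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]   # nominal scale
    scores = [6, 7, 7, 8, 8, 8, 9, 9, 10, 10]                                   # ratio scale

    # A frequency distribution counts how often each value occurs.
    print(Counter(responses))

    # A rough text version of a bar graph of the score frequencies.
    for score, count in sorted(Counter(scores).items()):
        print(score, "*" * count)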
PRACTICE TEST I-B
Lesson I-13: Descriptive Statistics
Objectives:
a) Define three measures of central tendency.
b) Define three measures of variability.
Key terms:
Central tendency
Mean
Median
Mode
Variability
Range
Variance
Sum of squares
Standard deviation
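For a concrete example (invented scores, not data from the reader), the Python sketch below computes each measure of central tendency and variability listed above; the variance here uses the sample formula, sum of squares divided by n - 1.

    import statistics

    data = [4, 8, 6, 5, 3, 8, 9, 5, 8]   # hypothetical scores

    mean = statistics.mean(data)          # central tendency: arithmetic average
    median = statistics.median(data)      # middle score when the data are ordered
    mode = statistics.mode(data)          # most frequent score

    data_range = max(data) - min(data)                    # variability: highest minus lowest
    sum_of_squares = sum((x - mean) ** 2 for x in data)   # sum of squared deviations from the mean
    variance = sum_of_squares / (len(data) - 1)           # sample variance = SS / (n - 1)
    std_dev = variance ** 0.5                             # standard deviation = square root of variance

    print(mean, median, mode, data_range, variance, std_dev)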
Lesson I-14: Graphing and Transformation of Data
Objectives:
a) Name two ways of graphing data.
b) Give two reasons why transformations of data are used.
c) Explain what a z score is and why it is used.
Key Terms:
z score
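A z score expresses a raw score as the number of standard deviations it lies above or below the mean, which is what makes scores from different scales comparable. A minimal sketch with invented exam scores:

    import statistics

    scores = [62, 70, 75, 80, 68, 74, 81, 90]   # hypothetical exam scores

    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)   # sample standard deviation

    # z = (X - M) / SD: a linear transformation that recenters the mean at 0
    # and rescales the standard deviation to 1.
    z_scores = [(x - mean) / sd for x in scores]
    print([round(z, 2) for z in z_scores])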
Lesson I-15: Correlation
Objectives:
a) Draw graphs of a positive, negative and zero correlation.
b) Explain what r² represents.
Key terms:
Scatter diagram
Correlation
Pearson r
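To see how the Pearson r and r² work, the sketch below (invented paired data) computes r from its definition, the covariance of the two variables divided by the product of their standard deviations, and then squares it to get the proportion of variance in one variable accounted for by the other.

    import statistics

    hours = [1, 2, 3, 4, 5, 6, 7, 8]           # hypothetical hours studied
    score = [52, 55, 61, 60, 68, 70, 74, 79]   # hypothetical quiz scores

    mx, my = statistics.mean(hours), statistics.mean(score)
    covariance = sum((x - mx) * (y - my) for x, y in zip(hours, score)) / (len(hours) - 1)
    r = covariance / (statistics.stdev(hours) * statistics.stdev(score))

    print(round(r, 3))        # close to +1: a strong positive correlation
    print(round(r ** 2, 3))   # r squared: proportion of variance in Y accounted for by X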
Lesson I-16: Probability
Objectives:
a) Explain what p < .001 means.
Key terms:
Inferential statistics
Gambler’s fallacy
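To illustrate what a p value means, the simulation sketch below (an added example, not from the reader) estimates how often an outcome at least as extreme as 9 heads in 10 coin flips occurs when chance alone is operating; p < .001 would mean an outcome expected less than 1 time in 1,000 under such a chance model.

    import random

    # Under the chance (null) model, each of 10 flips of a fair coin comes up heads
    # with probability .5. Estimate the probability of 9 or more heads by chance alone.
    trials = 100_000
    extreme = sum(
        1 for _ in range(trials)
        if sum(random.random() < 0.5 for _ in range(10)) >= 9
    )
    print(extreme / trials)   # about .011, so this outcome would be reported as p ≈ .01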
Lesson I-17: The Normal Distribution and Hypothesis Testing
Objectives:
a) Indicate the percentage of individuals in a normal distribution that are within
one and two standard deviations of the mean.
b) Describe the Central Limit Theorem.
c) Explain how hypotheses are tested statistically using the null hypothesis.
Key terms:
Central Limit Theorem
Standard error of the mean
Confidence intervals
Null hypothesis
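The sketch below (simulated data, an added illustration) checks the rule of thumb that about 68% of scores in a normal distribution fall within one standard deviation of the mean and about 95% within two, then computes the standard error of the mean and an approximate 95% confidence interval for a small sample drawn from that distribution.

    import math
    import random
    import statistics

    random.seed(1)

    # A large sample from a normal distribution with mean 100 and SD 15.
    population = [random.gauss(100, 15) for _ in range(100_000)]
    within_1sd = sum(85 <= x <= 115 for x in population) / len(population)
    within_2sd = sum(70 <= x <= 130 for x in population) / len(population)
    print(round(within_1sd, 3), round(within_2sd, 3))   # approximately .68 and .95

    # Standard error of the mean and an approximate 95% confidence interval for n = 25.
    sample = population[:25]
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    m = statistics.mean(sample)
    print(round(m - 1.96 * sem, 1), round(m + 1.96 * sem, 1))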
Lesson I-18: Examples of Inferential Statistics: The t-Test
Objectives:
a) Explain how a t-test works.
b) Distinguish between sample statistics and population parameters.
c) Describe the difference between a one-tailed and two-tailed test.
Key terms:
Degrees of freedom
Population parameters
One-tailed test
Two-tailed test
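For a concrete example (invented group scores), the sketch below computes an independent-groups t statistic with the pooled-variance formula; statistical libraries such as SciPy offer the same test as scipy.stats.ttest_ind.

    import statistics

    experimental = [12, 15, 14, 16, 13, 17, 15, 14]   # hypothetical treatment-group scores
    control = [10, 11, 13, 12, 9, 12, 11, 10]         # hypothetical control-group scores

    m1, m2 = statistics.mean(experimental), statistics.mean(control)
    v1, v2 = statistics.variance(experimental), statistics.variance(control)
    n1, n2 = len(experimental), len(control)

    df = n1 + n2 - 2                                    # degrees of freedom
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / df   # pooled estimate of the population variance
    t = (m1 - m2) / (pooled_var * (1 / n1 + 1 / n2)) ** 0.5

    # Compare t with the critical value for df = 14; use a two-tailed test unless the
    # hypothesis predicted the direction of the difference in advance.
    print(round(t, 2), df)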
PRACTICE TEST I-C
Unit II:
Basic Research Issues
Lesson II-1: The Context of Experimentation and Types of Variation
Objectives:
a) Describe the three stages in the interpretation of experimental results.
b) Identify three types of variation.
Key terms:
F-ratio
Confound
Lesson II-2: Statistical Hypothesis Testing
Objectives:
a) Distinguish between Type I and Type II errors.
b) Explain what the numerator and denominator of an F-ratio represent.
Key terms:
Type I error
Type II error
Alpha level
Beta level
Power
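The simulation sketch below (an added illustration that assumes SciPy is installed) draws both "groups" from the same population, so the null hypothesis is true and every rejection is a Type I error; over many runs the rejection rate comes out close to the alpha level of .05.

    import random
    from scipy import stats   # assumes SciPy is available

    random.seed(0)
    alpha = 0.05
    runs = 2_000
    rejections = 0

    # The null hypothesis is true here: both samples come from the same population,
    # so any significant result is a Type I error.
    for _ in range(runs):
        group1 = [random.gauss(50, 10) for _ in range(20)]
        group2 = [random.gauss(50, 10) for _ in range(20)]
        result = stats.ttest_ind(group1, group2)
        if result.pvalue < alpha:
            rejections += 1

    print(rejections / runs)   # close to alpha, i.e. about .05

Giving the two populations genuinely different means in the same simulation would instead estimate power, the probability of correctly rejecting a false null hypothesis.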
Lesson II-3: Threats to Internal Validity
Objectives:
a) Identify nine threats to internal validity.
b) Describe four steps in the experimentation process.
Key terms:
History
Maturation
Testing
Instrumentation
Statistical regression
Selection
Mortality
Selection-maturation interaction
Diffusion or imitation of treatments
Lesson II-4: Control Achieved through Participant Assignment
Objectives:
a) Identify the three hypotheses that must be considered in interpreting
experimental results.
b) Name two ways that an experimenter can determine the influence of one
variable on another.
Key terms:
Random selection
Random number table
Elimination procedure
Equating (matching) procedure
Lesson II-5: Randomization and Experimental Design
Objectives:
a) Describe the function of randomization.
b) Identify and discuss two kinds of randomization.
c) Distinguish between pretest and posttest.
d) Diagram a posttest-only control-group design.
e) Diagram a pretest-posttest control-group design.
f) Describe the problem that the Solomon four-group design helps prevent.
g) Identify the four steps shown in diagrams of experimental designs (p. 167).
Key terms:
Randomization
Random sampling
Random assignment
Representative sample
Pretest
Posttest
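The sketch below (a hypothetical participant pool) separates the two forms of randomization named above: random sampling selects participants from the population, and random assignment places the selected participants into the experimental and control groups.

    import random

    random.seed(42)

    volunteers = [f"P{i:02d}" for i in range(1, 21)]   # hypothetical population of volunteers

    # Random sampling: every member of the population has an equal chance of being selected.
    sample = random.sample(volunteers, 10)

    # Random assignment: each selected participant has an equal chance of ending up
    # in the experimental group or the control group.
    random.shuffle(sample)
    experimental, control = sample[:5], sample[5:]
    print(experimental)
    print(control)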
Lesson II-6: Between-Subjects Design Terminology and Completely Randomized Designs
Objectives:
a) Distinguish between within-subjects and between-subjects designs.
b) Describe a completely randomized design.
Key terms:
Factorial design
Between-subjects design
Completely randomized design
PRACTICE TEST II-A
Lesson II-7: Multileveled Completely Randomized Designs
Objectives:
a) Distinguish between two types of completely randomized designs: simple and
multileveled.
Key terms:
Multileveled completely randomized design
Analysis of variance (ANOVA)
Lesson II-8: Factorial Designs
Objectives:
a) Explain how a 2 x 2 factorial design works.
b) Distinguish between main effects and interaction effects in a factorial design.
Key terms:
Factorial design
2 x 2 factorial design
Main effects
Interaction effects
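To make the 2 x 2 logic concrete, the sketch below uses invented cell means (the factor names are hypothetical): main effects are read from the marginal means of each factor, and an interaction shows up as a "difference of differences" between the simple effects.

    # Hypothetical cell means for a 2 x 2 factorial design:
    # factor A = drug (placebo vs. active), factor B = therapy (no vs. yes).
    means = {("placebo", "no"): 10, ("placebo", "yes"): 14,
             ("active", "no"): 12, ("active", "yes"): 22}

    # Main effect of each factor: difference between its marginal means.
    main_a = ((means[("active", "no")] + means[("active", "yes")]) / 2
              - (means[("placebo", "no")] + means[("placebo", "yes")]) / 2)
    main_b = ((means[("placebo", "yes")] + means[("active", "yes")]) / 2
              - (means[("placebo", "no")] + means[("active", "no")]) / 2)

    # Interaction: does the effect of B differ across the levels of A?
    interaction = ((means[("active", "yes")] - means[("active", "no")])
                   - (means[("placebo", "yes")] - means[("placebo", "no")]))

    print(main_a, main_b, interaction)   # 5.0, 7.0, 6: two main effects plus an interaction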
Lesson II-9: The Logic of Experimentation in Factorial Designs
Objectives:
a) Identify the three independent variables and the dependent variable in the
example shown in your course reader.
b) Give an example of an interaction effect.
c) Explain what it means to say that two independent variables interact.
d) Know what each column of an ANOVA table represents.
Key terms:
df
SS
MS
F
p
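For a concrete example of where the table's columns come from (invented scores for three groups), the sketch below computes SS between and within groups, their df, MS = SS / df, and F as the ratio of between-groups MS (treatment plus error variance) to within-groups MS (error variance alone).

    import statistics

    groups = [[3, 5, 4, 6], [7, 8, 6, 9], [10, 9, 11, 12]]   # hypothetical scores, one list per group
    all_scores = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_scores)

    # SS between: variation of the group means around the grand mean (weighted by group size).
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # SS within: variation of individual scores around their own group mean (error variance).
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)

    ms_between = ss_between / df_between   # MS = SS / df
    ms_within = ss_within / df_within
    f_ratio = ms_between / ms_within       # F = MS between / MS within

    print(df_between, df_within, round(ms_between, 2), round(ms_within, 2), round(f_ratio, 2))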
Lesson II-10: Eight Possible Outcomes
Objectives:
a) Understand and be able to interpret the outcomes shown in your course reader.
Lesson II-11: Subject Variables and Advantages of Factorial Designs
Objectives:
a) Explain how the introduction of a subject variable weakens a factorial design.
b) Name three advantages of factorial designs.
Key terms:
Subject variable
Lesson II-12: Within-Subjects Designs (Part 1)
Objectives:
a) Explain how an experiment can be carried out as either a between-subjects or a
within-subjects design.
b) Indicate the effect that a within-subjects design can have on the F-ratio.
c) Identify the function of intrasubject counterbalancing.
Key terms:
Intrasubject counterbalancing
PRACTICE TEST II-B
Lesson II-13: Within-Subjects Designs (Part 2)
Objectives:
a) Identify three advantages and two disadvantages of within-subjects designs.
b) Explain how intragroup counterbalancing can control for order effects.
c) Distinguish between intrasubject and intragroup counterbalancing.
Key terms:
Order effects
Fatigue effects
Practice effects
Intragroup counterbalancing
Incomplete counterbalancing
Repeated measures
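The sketch below (three hypothetical conditions) generates a complete intragroup counterbalancing scheme: every possible order of the conditions is used, each by a different participant, so order, practice, and fatigue effects are spread evenly across conditions. (Intrasubject counterbalancing, by contrast, gives each individual a sequence such as A-B-B-A.)

    from itertools import permutations

    conditions = ["A", "B", "C"]   # hypothetical treatment conditions

    # Complete counterbalancing: all possible orders of the three conditions.
    orders = list(permutations(conditions))          # 6 orders for 3 conditions
    participants = [f"P{i}" for i in range(1, 7)]    # one participant per order

    for participant, order in zip(participants, orders):
        print(participant, "->", " then ".join(order))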
Lesson II-14: Mixed Designs and Matched-Subjects Procedures
Objectives:
a) Give an example of a mixed design.
b) Explain how a matched design works.
c) Describe two ways that matching can be used.
Key terms:
Mixed design
Matched design
Lesson II-15: Ecology
Objectives:
a) Distinguish between the Leipzig model and the Paris model.
b) Explain what is meant by “the ecology of the psychological experiment”.
Key terms:
Ecology
Ecological validity
Lesson II-16: Experimenter Factors
Objectives:
a) Describe two types of experimenter effects.
b) Name at least two ways to avoid experimenter bias.
Key terms:
Experimenter effects
Experimenter bias
Interrater reliability
Lesson II-17: Subject Factors
Objectives:
a) Describe two types of subject factors.
Key terms:
Hawthorne effect
Placebo effects
Double-blind experiment
Demand characteristics
Lesson II-18: Cultural and Social Bias
Objectives:
a) Give two examples of how a change in paradigm can influence research.
PRACTICE TEST II-C
Unit III: Advanced Research Issues
Lesson III-1: Closed and Open Systems
Objectives:
a) Distinguish between closed systems and open systems.
Key terms:
External validity
Archival research
Quasi-experimental designs
Correlational designs
Lesson III-2: Quasi-Experimental Designs (Part 1)
Objectives:
a) Describe a time series design.
b) Identify the strengths and weaknesses of interrupted and multiple time series
designs.
Key terms:
Time series design
Single-group, pretest-posttest design
Interrupted time series design
Multiple time series design
Lesson III-3: Quasi-Experimental Designs (Part 2)
Objectives:
a) Identify the strengths and weaknesses of nonequivalent before-after and
retrospective designs.
Key terms:
Nonequivalent before-after design
Retrospective design
Ex post facto design
Lesson III-4: Correlational Procedures
Objectives:
a) Describe the difference between a correlational design and an experimental
design.
b) Give an example of a correlational study.
c) Explain the following statement: “Correlation does not imply causality”.
Key terms:
Correlational study
Third-variable problem
Lesson III-5: Naturalistic Observations
Objectives:
a) Describe Rosenhan’s naturalistic study of mental hospitals.
b) Identify three problems related to data collection in naturalistic studies.
c) Name at least two advantages and two disadvantages of naturalistic
observation.
Key terms:
Reactive behavior
Unobtrusive observation
Selective perception
Lesson III-6: Types of Single-Subject Designs
Objectives:
a) Identify the distinguishing feature shared by single-subject and small-N
designs.
b) Give at least three examples from the field of psychology of single-subject
designs.
c) Name two purposes that single-subject designs can serve.
PRACTICE TEST III-A
Lesson III-7: Case Study Designs
Objectives:
a) Give an example of a case study.
Key terms:
Case study
One-shot case study
Lesson III-8: Experimental Single-Subject Designs (Part 1)
Objectives:
a) Distinguish intrasubject and intersubject replication.
b) Explain how a reversal design works.
Key terms:
Intrasubject replication
Reversal design
Lesson III-9: Experimental Single-Subject Designs (Part 2)
Objectives:
a) Explain how multiple-baseline designs work and give an example.
b) Explain how multielement designs work and give an example.
Key terms:
Multiple-baseline single-subject design
Multielement design
Lesson III-10: Introduction to Survey Research
Objectives:
a) Explain the purpose of survey research.
b) Give an example of survey research.
c) Identify the nine steps for designing a survey research project.
Lesson III-11: Question Construction and Formats
Objectives:
a) Identify one advantage and one disadvantage of an open-ended question.
b) Give an example of an open-ended and a closed-ended question.
c) List at least three guidelines for constructing an effective questionnaire.
Key terms:
Open-ended question
Closed-ended question
Likert scale
Semantic differential scale
Random response method
Lesson III-12: Methods of Administering Data
Objectives:
a) List one advantage and disadvantage of face-to-face interviews, telephone
interviews, and mail questionnaires.
b) Describe the Nisbett and Wilson findings concerning introspection.
PRACTICE TEST III-B
Lesson III-13: Sampling
Objectives:
a) Distinguish between probability and nonprobability sampling.
b) Describe five types of probability sampling.
c) Describe three types of nonprobability sampling.
Key terms:
Probability sampling
Simple random sampling
Systematic sampling
Stratified random sampling
Cluster sampling
Multistage sampling
Nonprobability sampling
Convenience sampling
Quota sampling
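The sketch below uses a hypothetical sampling frame of 100 students (year in school serves as the stratifying variable) to contrast three of the probability-sampling methods listed above.

    import random

    random.seed(7)

    # Hypothetical sampling frame: 100 students, each with a year in school (the strata).
    frame = [{"id": i, "year": random.choice(["first", "second", "third", "fourth"])}
             for i in range(1, 101)]

    # Simple random sampling: every member of the frame has an equal chance of selection.
    simple = random.sample(frame, 10)

    # Systematic sampling: a random starting point, then every k-th member of the frame.
    k = 10
    start = random.randrange(k)
    systematic = frame[start::k]

    # Stratified random sampling: draw a separate random sample within each stratum.
    stratified = []
    for year in ["first", "second", "third", "fourth"]:
        stratum = [person for person in frame if person["year"] == year]
        stratified.extend(random.sample(stratum, 2))

    print(len(simple), len(systematic), len(stratified))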
Lesson III-14: Sample Size
Objectives:
a) Explain how one estimates the sample size needed in a survey.
Key terms:
Confidence interval
Sampling error
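The sketch below applies one common textbook formula for estimating the sample size needed to measure a proportion, n = z² · p(1 - p) / e², assuming 95% confidence and maximum variability (p = .5); the course reader's own procedure may differ in detail.

    import math

    def required_sample_size(margin_of_error, z=1.96, p=0.5):
        # n = z^2 * p * (1 - p) / e^2, with p = .5 as the most conservative choice.
        return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(required_sample_size(0.05))   # about 385 respondents for a ±5% margin of error
    print(required_sample_size(0.03))   # about 1,068 respondents for ±3%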
Lesson III-15: Preparing Your Article: Abstract and Introduction
Objectives:
a) Give an overview of what a scientific article should include.
b) Describe the function and characteristics of the abstract and introduction.
Key terms:
Abstract
Introduction
Lesson III-16: Preparing Your Article: Method and Results Sections
Objectives:
a) Describe the function and characteristics of the method and results sections.
Key terms:
Method section
Results section
Lesson III-17: Preparing Your Article: Discussion and References
Objectives:
a) Describe the function and characteristics of the discussion and references.
Key terms:
Discussion
References
Lesson III-18: Publishing Your Article and Deciding What Makes a Good Article
Objectives:
a) Briefly describe the process by which scientific articles get published.
b) List the questions that should be answered in preparing a journal article.
Key terms:
Peer review
PRACTICE TEST III-C
Practice Test I-A
1. In the scientific method, a formally stated expectation concerning the outcome of an
experiment is called
a) theory.
b) induction.
c) a hypothesis.
d) empiricism.
e) a marker variable.
2. If we were interested in understanding the association between two variables, we would use
a) the experimental method.
b) naturalistic observation.
c) authority.
d) modeling.
e) the correlational approach.
3. An experiment is externally valid if
a) the external variables are operationally defined.
b) the results generalize to other settings.
c) the results are attributed only to external variables.
d) the outcome is due to the independent variable only.
e) the outcome occurs outdoors.
4. If a child learned that a stove was hot by touching it, his/her knowledge was gained through
a) science.
b) common sense.
c) tenacity.
d) authority.
e) reason.
5. Which of the following do not represent two features of the scientific approach?
a) Observation and explanation
b) Empiricism and theory
c) Experimentation and reason
d) Experience and belief
e) Experience and observation
6. Science seeks to understand internal experience through
a) direct observation.
b) replication.
c) inference from behavior.
d) introspection.
e) common sense.
Practice Test I-B
1. If you have an article that was published in 1986 and you want to know whether any more
recent articles have referenced this work, you should consult
a) PsycInfo.
b) Psychological Abstracts.
c) Index Medicus.
d) Social Science Citation Index.
e) MEDLINE.
2. When a frequency distribution is constructed, the ____________ appears on the y-axis.
a) number of individuals who make each response
b) responses of the participants
c) independent variable
d) abscissa
e) bimodal distribution
3. The notion that science advances through paradigm shifts is attributed to
a) Karl Popper.
b) Galileo.
c) Graham Wallas.
d) Isaac Newton.
e) Thomas Kuhn.
4. In the hypothesis, "People feel more depressed in the winter," ________________ is the
independent variable and ________________ is the dependent variable.
a) winter; depression
b) depression; season of the year
c) season of the year; depression
d) depression; winter
e) summer; winter
5. Weight is measured on a(n) ___________ scale.
a) nominal
b) interval
c) ratio
d) ordinal
e) median
6. To make a hypothesis testable, you must
a) formulate a broad general question.
b) operationally define your variables.
c) use inductive reasoning.
d) have construct validity.
e) rely on modus tollens.
Practice Test I-C
1. If you hypothesized that two groups would be different, then the appropriate statistical test is
a) directional.
b) not significant.
c) one-tailed.
d) significant.
e) two-tailed.
2. The middle score in a set of scores is called the
a) mean.
b) median.
c) mode.
d) arithmetic average.
e) central tendency.
3. In the normal distribution, the percentage of scores that fall within two standard deviations of
the mean is approximately
a) 34%
b) 50%
c) 68%
d) 95%
e) 100%
4. The square root of the variance is called the
a) squared deviation.
b) average squared deviation.
c) sum of the squared deviations.
d) the standard deviation.
e) range.
5. To compare data collected on two different scales, you should
a) transform the data.
b) represent the data graphically.
c) average the data.
d) calculate correlation coefficients.
e) graph the data on a scatter diagram.
6. If an experimental report states that results are significant, p < .05, this means that if the
experiment were repeated you would expect
a) to obtain the same result less than 5 times in 100.
b) the result to occur by chance less than 95 times in 100.
c) the result to occur by chance less than 5 times in 100.
d) to obtain the same result more than 50% of the time.
e) to obtain the same result at least 5% of the time.
Practice Test II-A
1. The fact that research participants may become fatigued or bored in the course of an
experiment introduces a confound called
a) maturation.
b) history.
c) mortality.
d) statistical regression.
e) instrumentation.
2. Assigning equal numbers of men and women to each experimental condition is an example of
a) random assignment.
b) counterbalancing.
c) proportional assignment.
d) an equating procedure.
e) random selection.
3. A confounding variable adds ________________ to our experimental results.
a) nonsystematic variance
b) controlled variance
c) systematic variance
d) error variance
e) standard error
4. Factorial designs must have
a) more than two levels of the dependent variable.
b) more than one dependent variable.
c) more than two levels of the independent variable.
d) more than one independent variable.
e) All of the above.
5. In a Solomon four-group design,
a) participants are not randomly assigned.
b) not all participants receive the pretest.
c) not all participants receive the posttest.
d) all participants are assigned to groups using an elimination procedure.
e) all participants are assigned to groups using an equating procedure.
6. If we reject the null hypothesis when it is true, we make a
a) Type I error.
b) Type II error.
c) correct rejection.
d) correct acceptance.
e) t-test.
Practice Test II-B
1. In an analysis of variance summary table, MS equals
a) F/df.
b) F/N.
c) SS/df.
d) SS/N.
e) SS/F.
2. In within-subjects designs,
a) each individual receives all levels of the independent variable.
b) each individual contributes more than one measure of the dependent variable.
c) different levels of the independent variable are given to the same group of research
participants.
d) All of the above.
3. In between-subjects designs,
a) there are more than two levels of the independent variable.
b) each individual receives all levels of the independent variable.
c) different levels of the independent variable are presented to the same group of participants.
d) there is more than one independent variable.
e) each individual receives only one level of the independent variable.
4. If in a factorial experiment there is an interaction,
a) there will be at least one main effect.
b) there will be two main effects.
c) there will be no main effects.
d) any of the above may occur.
5. In a 2 x 2 factorial design, you would calculate __________ F ratios.
a) 1
b) 2
c) 3
d) 4
e) 5
6. A within-subjects design is more sensitive than a between-subjects design because
a) there are fewer participants.
b) there is less error variance.
c) there are fewer degrees of freedom.
d) there are fewer F ratios.
e) All of the above.
Practice Test II-C
1. If a treatment has a permanent effect on people, then it is inappropriate to use a
_____________ design.
a) between-subjects
b) within-subjects
c) matched-subjects
d) completely randomized
e) double-blind
2. Cultural biases are introduced into research through the acceptance of certain shared world
views or approaches to science, which are called
a) experimental methods.
b) correlational techniques.
c) demand characteristics.
d) hypotheses.
e) paradigms.
3. Research participants' performance may be altered as a result of special attention rather than
because of any effect of the independent variable. This is called
a) the Hawthorne effect.
b) the placebo effect.
c) demand characteristics.
d) a cultural effect.
e) experimenter bias.
4. The consistency of observations across different individuals is evaluated by using
a) internal validity.
b) correlation.
c) interrater reliability.
d) predictive validity.
e) external validity.
5. Ecology of the psychological experiment is concerned with the interactions among
a) scientist, participant, and witness.
b) scientist, participant, and context.
c) context, culture, and environment.
d) two or more independent variables.
e) culture, witness, and participant.
6. Matched-subjects procedures reduce error variance
a) more than between-subjects designs.
b) more than within-subjects designs.
c) less than between-subjects designs.
d) as much as within-subjects designs.
e) as much as between-subjects designs.
Practice Test III-A
1. If we want to make comparisons between two groups that we know are different at the start of
the experiment, the design to use is
a) nonequivalent before-after.
b) multiple time-series.
c) ex post facto.
d) interrupted time-series.
e) retrospective.
2. If a participant's behavior is influenced by the presence of an observer, this is called
a) responsive behavior.
b) reactive behavior.
c) obtrusive behavior.
d) selective perception.
e) the third-variable problem.
3. Which of the following applies to single-subject designs?
a) The ability to control variables is critical in establishing a relation between the independent
and dependent variables.
b) The designs rely on probability theory and inferential statistics.
c) Individual differences become error variance.
d) Sampling procedures are emphasized.
e) Large samples are preferred to small samples.
4. To discover whether two variables are associated (but not necessarily causally), the type of
research you would do is
a) experimental.
b) correlational.
c) observational.
d) unobtrusive.
e) controlled.
5. A time-series design is
a) a within-subjects design.
b) a between-subjects design.
c) a mixed design.
d) a matched design.
e) none of the above.
6. An environment in which important variables can be controlled
a) is called an open system.
b) is called a closed system.
c) has high external validity.
d) has low internal validity.
e) has high ecological validity.
Practice Test III-B
1. To investigate long-lasting effects of an independent variable with a single individual, you
should use
a) a reversal design.
b) a multiple baseline design.
c) a multielement design.
d) a naturalistic case study.
e) a nonequivalent before-after design.
2. Open-ended questions in a survey
a) are limited to yes and no answers.
b) provide quantitative data.
c) impose the researchers' point of view on a survey.
d) make the data easier to analyze.
e) provide researchers with information that they had not considered before.
3. The most cost-effective way to administer a large survey is
a) face to face.
b) by phone.
c) by mail.
d) by computer.
e) in the lab.
4. The first step in survey research is
a) to state specific hypotheses.
b) to decide on the population to be surveyed.
c) to specify the broad objectives of the research.
d) to construct clear and unambiguous questions.
e) to decide how the data are to be entered into the computer.
5. In single-subject research, reliability can be assessed through
a) intrasubject replication.
b) multiple dependent variables.
c) one-shot case studies.
d) averaging results from many subjects.
e) intersubject replication.
6. The case study is useful for
a) working with large samples.
b) confirming a research hypothesis.
c) allowing us to make strong inferences.
d) studying rare phenomena.
e) establishing ecological validity.
Practice Test III-C
1. The brief 100-150 word summary that appears before the body of a scientific article is called
the
a) introduction.
b) synopsis.
c) abstract.
d) summation.
e) discussion.
2. The idea that science is a shared activity is by no means new. More than 2000 years ago,
Aristotle emphasized that science had two parts:
a) inquiry and argument.
b) theoretical and applied.
c) statistical significance and scientific significance.
d) exploration and summary.
e) prediction and control.
3. When articles are submitted for publication, they are usually sent by the journal editor to other
scientists familiar with the topic. This process is referred to as
a) summary review.
b) peer review.
c) evaluation.
d) proof reading.
e) scientific copyediting.
4. Which of the following is not included in the discussion section?
a) Main results of the experiment
b) Limitations of the experiment
c) Statistical significance of the results
d) Ways that your experiment relates to other similar experiments
e) The implications of the results
5. The sampling procedure in which you choose every fifth person in the population is called
a) systematic sampling.
b) stratified random sampling.
c) cluster sampling.
d) random sampling.
e) convenience sampling.
6. The section of the paper that describes exactly how an experiment was conducted is called the
a) introduction
b) method
c) abstract
d) discussion
e) technique
Answer Key

Practice Test I-A:    1. c    2. e    3. b    4. a    5. d    6. c
Practice Test I-B:    1. d    2. a    3. e    4. c    5. c    6. b
Practice Test I-C:    1. e    2. b    3. d    4. d    5. a    6. c
Practice Test II-A:   1. a    2. d    3. c    4. d    5. b    6. a
Practice Test II-B:   1. c    2. d    3. e    4. d    5. c    6. b
Practice Test II-C:   1. b    2. e    3. a    4. c    5. b    6. a
Practice Test III-A:  1. a    2. b    3. a    4. b    5. a    6. b
Practice Test III-B:  1. b    2. e    3. c    4. c    5. a    6. d
Practice Test III-C:  1. c    2. a    3. b    4. c    5. a    6. b