Research Methodology
Course Organiser: Colin Legg
Crew Building room 121, Phone 650 5401
[email protected]
Introduction
What does 'scientifically proven' mean? How do you recognise 'rigorous scientific research'?
Did you know that in a survey of 537 papers about field ecology published in respected
scientific journals, one third of the 191 papers that used inferential statistics and gave
details of their methodology had used erroneous methods that cast serious doubt on the
validity of their conclusions?* (Presumably a much higher proportion of the remaining 346
papers, which concealed or gave inadequate details of their methodology, or did not use
inferential statistics, were also invalid!)
Can you trust the interpretation of other people's surveys and experiments?
Can other people trust your surveys and experiments?
The approach of the Research Methodology module is to consider the quality of information
and the concepts behind interpretation and the testing of hypotheses. The primary objective
is to understand what makes good survey design and a good experimental design. The course
will give you the conceptual tools you need to be able to talk to a statistician when necessary.
This is not a statistics course (there will be very few 'equations' involved beyond the very
simplest). The approach is to discuss the rationale behind commonly used sampling and
experimental methods and the assumptions made. The course is suitable for students who
have done some basic statistics, but need to understand more about experimental design and
data interpretation.
There will be tasks for you to do during the afternoons or in your own time, perhaps involving
reading some background material or planning a survey or experiment. These will form the
basis for group discussion.
*
Hurlbert, S. H. (1984). Pseudoreplication and the design of ecological field experiments. Ecological
Monographs, 54: 187-211
Course summary
This course is about the quality of information and how data are used and abused in the
scientific process. We look at the way that information mutates and evolves from original
observations of the real world through data summary and the testing of hypotheses to
publication and application. We draw up guidelines for planning effective data collection and
designing experiments and consider the critical appraisal of data interpretation presented by
others in environmental science and ecology. An understanding of the variability of nature is
a central theme and we will review the statistical methods for describing precision and for
testing hypotheses in analysis of variance and regression.
Course Aims
1.
To consider the process of science from observation through data collection, analysis,
interpretation, publication and implementation; and to adopt a critical approach to the
scientific method
2.
To be able to assess the quality of information; to identify potential sources of error
and bias and to be able to quantify precision
3.
To understand the concepts behind the testing of hypotheses
4.
To understand the principles behind the design of surveys and experiments and to take
a critical approach to data interpretation
Assessment
The course will be assessed by a survey or experimental design problem set in week 5 to be
submitted in week 7 (50%) and by exam (50%)
Course Web page:
https://www.geos.ed.ac.uk/postgraduate/MSc/ResMeth
(Course pages will migrate to WebCT shortly!)
Provisional timetable
Week 1. The quality of information
Class: The flow of information; errors and degradation of information; biases in the process of science; what makes it science?
Task: Discussion of scientific method

Week 2. Variability in data
Class: Taking a sample; estimating variability; measuring accuracy and precision; sampling distributions; standard deviation, standard error and confidence intervals
Task: Brief introduction to Minitab

Week 3. Sampling and survey design
Class: Defining objectives; how to take a representative sample; stratification; statistical independence of observations
Task: An example survey design problem

Week 4. Data storage
Class: Discussion of survey design tasks. Database design; tables, relations; queries; forms
Task: Creating a database in Access

Week 5. Asking questions and testing hypotheses
Class: Forming hypotheses; null models; test statistics; Type I and Type II errors; the designed experiment; analysis of variance; assumptions
Task: Testing hypotheses with Minitab

Week 6. Cause and effect or correlation
Class: Laboratory experiments, field experiments and natural experiments; correlation; regression; prediction and statistical control
Task: Correlation and regression with Minitab

Week 7. Design of experiments
Class: Reality, generality and precision; sources of bias; non-demonic intrusions and dispersion of treatments
Task: An experimental design exercise

Week 8. More complex anovas
Class: Discussion of experimental design tasks. Factorial designs; fixed and random effects; nested designs

Week 9. Power analysis
Class: Discussion of published experiments. Type II errors revisited; monitoring and power
Task: Criticism of some published experiments

Week 10. Modelling
Class: Thought experiments and modelling; handling complexity; scaling issues; assumptions and sensitivity analysis
Task: Exam revision

Exam
Research Methodology - Concepts
Week 1
Quality of Information
Authority of information – author and source
Context – where, when
Basis for calculations and assumptions made
Biases – motives of author, methods used
Flow of information and information degradation
Sources of error
Errors of measurement – noise and bias
Sampling error, machine error, observer influence, elite data, indicators
Copying errors, selective reporting
Statistical summary
Scientific explanation:
Empirical, Rational, Testable, Parsimonious, General, Tentative, Rigorously evaluated
Presentational bias
Extrapolation
Publication – Typographic errors, Editorial control
Funding bias – Publish or Perish
Citation – Chinese whispers – dangers of Abstracting Journals
Impact – Political decisions, Misuse of data, Social responsibility of scientists
Reading
Committee on Science Engineering, and Public Policy (1995). On Being A Scientist:
Responsible Conduct In Research. National Academy of Sciences, National Academy Press.
Washington. [Available online from:
http://www.mirrorservice.org/sites/www.nap.edu/readingroom/books/obas/ ]
Week 2
Parameter estimation
Data type:
Qualitative: categorical / nominal; ordinal
Quantitative: discrete; continuous: interval (unconstrained); ratio (constrained)
Need for replication
Population; sample unit; sample, census
Statistics of location: mean; median
Sampling frequency distributions; normal probability distribution
Statistics of dispersion: interval estimate
Range; quartiles and inter-quartile range;
Mean deviation
Sums of squares of deviation from mean; degrees of freedom
Variance, standard deviation
Sample mean vs population mean
Standard error
t-distribution
Confidence interval
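As a concrete illustration of the Week 2 statistics, here is a minimal Python sketch using invented measurements. The normal approximation is used for the 95% interval; for a small sample the t-distribution would give a slightly wider interval.

```python
import statistics as st
from statistics import NormalDist

# Hypothetical sample: ten measurements of plant height (cm), invented numbers
sample = [12.1, 11.4, 13.0, 12.6, 11.9, 12.8, 13.3, 11.7, 12.2, 12.5]

n = len(sample)
mean = st.mean(sample)
sd = st.stdev(sample)            # sample standard deviation (n - 1 degrees of freedom)
se = sd / n ** 0.5               # standard error of the mean

# 95% confidence interval by the normal approximation (z ~ 1.96);
# a t critical value would widen this slightly for small n
z = NormalDist().inv_cdf(0.975)
ci = (mean - z * se, mean + z * se)
print(f"mean={mean:.2f}, sd={sd:.2f}, se={se:.2f}, 95% CI = {ci[0]:.2f} to {ci[1]:.2f}")
```

The standard error shrinks with the square root of the sample size, which is why quadrupling the sample only halves the interval width.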
Reading
Wardlaw, A.C. (1985). Practical Statistics for Experimental Biologists. Wiley, Chichester.
Read Chapters 1 and 2 ‘How to condense the bulkiness of data’.
Quinn, G. P. & Keough, M. J. (2002). Experimental Design and Data Analysis for
Biologists. Cambridge U.P. Read sections 2.1 - 2.3.
Week 3
Survey design
Maximise precision, reality, generality; minimise bias, effort, cost
Need for clear, simple objectives, including required precision
Sample unit: natural and artificial sample units
Population: targeted and sampled populations
Sources of error: measurement error; sampling error
Precision, accuracy, bias
Representative sample; random sample
Sampling frame
Census; accessibility sample; judgmental sample
Quota sample; systematic sample
Probability sample: random sample
Stratified random sample
Non-response and volunteer individuals
Statistical independence
Confounding factors
Data management
Pilot study and mock analysis
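A stratified random sample can be sketched in a few lines of Python. The sampling frame, habitat strata and the 10% sampling fraction below are all invented for illustration: each stratum is sampled at random in proportion to its size.

```python
import random

random.seed(1)

# Hypothetical sampling frame: 100 plots, stratified by habitat type
frame = [{"plot": i, "habitat": "wet" if i < 30 else "dry"} for i in range(100)]

# Group the frame into strata
strata = {}
for unit in frame:
    strata.setdefault(unit["habitat"], []).append(unit)

# Proportional allocation: a simple random sample of 10% within each stratum
sample = []
for habitat, units in strata.items():
    k = max(1, round(0.10 * len(units)))
    sample.extend(random.sample(units, k))

print(len(sample), "units sampled")
```

Stratification guarantees that each habitat is represented in its correct proportion, which a single simple random sample over the whole frame would only achieve on average.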
Week 4.
Databases
Database management system (DBMS); relational database
Rectangular data table (relation); records (rows, tuples) and fields (columns);
Constraints, data types; Rules for error checking
Primary key, foreign key
Linked tables; one-to-one, one-to-many and many-to-many links
Query; dynaset
Normalisation
Form; report
Backups
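The Week 4 ideas (primary and foreign keys, error-checking rules, one-to-many links, queries) can be sketched with Python's built-in sqlite3 module rather than Access. The table names and the cover-percentage rule below are invented for illustration.

```python
import sqlite3

# Hypothetical two-table design: one site has many quadrat records (one-to-many)
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE site (
    site_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL)""")
con.execute("""CREATE TABLE quadrat (
    quadrat_id INTEGER PRIMARY KEY,
    site_id    INTEGER NOT NULL REFERENCES site(site_id),  -- foreign key
    cover_pct  REAL CHECK (cover_pct BETWEEN 0 AND 100))   -- error-checking rule
""")
con.execute("INSERT INTO site VALUES (1, 'Blackford Hill')")
con.executemany("INSERT INTO quadrat VALUES (?, ?, ?)",
                [(1, 1, 55.0), (2, 1, 72.5)])

# A query joining the linked tables; its result set is the 'dynaset'
rows = con.execute("""SELECT s.name, AVG(q.cover_pct)
                      FROM site s JOIN quadrat q ON q.site_id = s.site_id
                      GROUP BY s.site_id""").fetchall()
print(rows)
```

The CHECK constraint rejects impossible values at entry, which is far cheaper than cleaning them out at analysis time.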
Week 5
Hypothesis testing
Inductive reasoning; deductive reasoning; falsification
Model - working hypothesis - testable hypothesis - null hypothesis
Null model - test statistic - sampling frequency distribution
Significant and non-significant results; the significance threshold (α)
critical values of test statistic
Type I and Type II errors
Non-parametric tests
Monte Carlo tests, e.g. sampled randomisation test
Parametric tests - ANOVA
Partitioning of variance
between-treatment variance
within treatment (error or residual) variance,
total variance
F statistic
Statistical model; Assumptions of the model
Homogeneity of variance
Normal error distribution
Independent observations
Type I error rate (α) and Type II error rate (β)
Statistical power (1 - β)
Data transformation
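The partitioning of variance in a one-way ANOVA can be shown directly with a minimal Python sketch (invented yields under three hypothetical treatments): the total sum of squares splits exactly into a between-treatment part and a within-treatment (error) part, and their mean squares form the F statistic.

```python
from statistics import mean

# Hypothetical yields under three treatments (invented numbers)
groups = {
    "control": [4.1, 3.8, 4.4, 4.0],
    "low":     [5.2, 5.6, 4.9, 5.3],
    "high":    [6.1, 6.4, 5.8, 6.0],
}

all_obs = [x for g in groups.values() for x in g]
grand = mean(all_obs)

# Partition the total sum of squares into between- and within-treatment parts
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
ss_total = sum((x - grand) ** 2 for x in all_obs)

df_between = len(groups) - 1              # treatments minus one
df_within = len(all_obs) - len(groups)    # error degrees of freedom
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"SS_between={ss_between:.2f}, SS_within={ss_within:.2f}, F={f_stat:.1f}")
```

A large F means the variation between treatment means is large relative to the background variation within treatments; it is compared against the F sampling distribution to obtain a p-value.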
Week 6
Correlation and regression
Sum of cross products and Covariance
Pearson’s Correlation coefficient
Bivariate normal distribution
Cause and effect relationships
Ordinary least-squares regression
The statistical model
R2 – proportion of variance explained
Uses of regression: detect causal relationship
mathematical description
prediction
statistical control
substitution of variables (calibration of indicators)
Model II regression: y-on-x, x-on-y and major axis regressions
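The Week 6 quantities all come from the same three sums, as this minimal Python sketch shows (the paired observations are invented): the sum of cross products gives both Pearson's r and the y-on-x least-squares slope.

```python
from statistics import mean

# Hypothetical paired observations (invented numbers)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

mx, my = mean(x), mean(y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # sum of cross products
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

r = sxy / (sxx * syy) ** 0.5       # Pearson's correlation coefficient
slope = sxy / sxx                  # ordinary least-squares, y-on-x
intercept = my - slope * mx
r_squared = r ** 2                 # proportion of variance explained
print(f"r={r:.3f}, y = {intercept:.2f} + {slope:.2f}x, R^2={r_squared:.3f}")
```

Note that regressing x on y would give a different line (slope sxy/syy in the other orientation), which is why Model II questions arise when neither variable is controlled.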
Week 7
Experimental design
Testable hypothesis
A priori and a posteriori hypotheses - compounding of Type I errors
Data mining (data snooping)
File drawer problem
Precision, generality and reality
Manipulative laboratory and field experiments
Mensurative 'natural' experiments: snapshot and trajectory experiments
Statistical independence of observations; pseudoreplication
Interspersion of treatments - non-demonic intrusions
Control treatment; blind and double-blind procedures
BACI design (before-after-control-impact)
Pilot study and mock data analysis
Week 8
More complex ANOVA
Randomised block design
Factorial (or crossed) and nested designs
Analysis of covariance
Additive and multiplicative effects;
Interaction effects - first and second order
Fixed effects and random effects;
Model I, Model II and mixed model
Latin square and Split plot designs
Balanced and unbalanced designs
General Linear Models (GLM)
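The layout of a randomised complete block design can be sketched in Python (four invented treatments and four invented blocks): every block receives every treatment once, with the order randomised independently within each block.

```python
import random

random.seed(42)

# Hypothetical randomised complete block layout: four treatments per block,
# randomised independently within each of four blocks
treatments = ["A", "B", "C", "D"]
blocks = {}
for block in range(1, 5):
    order = treatments[:]
    random.shuffle(order)          # independent randomisation per block
    blocks[block] = order

for block, order in blocks.items():
    print(f"block {block}: {order}")
```

Blocking removes block-to-block variation from the error term, while the within-block randomisation guards against bias in how treatments land on plots.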
Week 9
Monitoring and Power Analysis
Surveillance and monitoring
Limit of acceptable change
Indicators, observer error
Power of test depends on:
variance, effect size, sample size, alpha rate and type of test
Statistical models and process-based models
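The dependence of power on effect size, variance, sample size and α can be made concrete with a small Python sketch. This uses the standard approximation for a two-sided one-sample z-test with known variance (the negligible lower-tail contribution is ignored); the function name and the numbers are illustrative, not from the course.

```python
from statistics import NormalDist

def power_z_test(effect, sd, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test (known variance)."""
    nd = NormalDist()
    se = sd / n ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # Probability of exceeding the critical value when the true effect holds
    return nd.cdf(effect / se - z_crit)

# Power rises with sample size (and with effect size; falls with variance)
for n in (10, 25, 50):
    print(n, round(power_z_test(effect=0.5, sd=1.0, n=n), 3))
```

Running such a calculation before data collection tells you whether a planned monitoring scheme has any realistic chance of detecting the limit of acceptable change.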
Week 10
Modelling (Mat Williams)
Thought experiments and modelling
Handling complexity
Scaling issues
Assumptions and sensitivity analysis
Reference texts
There is no particular course text that we shall be following. There are many good books in
the Darwin Library, some very advanced, some very basic - you need to find the book
that suits you. The following is a brief guide to some of those I have looked at.
Quinn, G. P. & Keough, M. J. (2002). Experimental Design and Data Analysis for
Biologists. Cambridge U.P. {Excellent coverage of the subject, but a little advanced - assumes a background knowledge of statistics. Good for researchers. QH323.5 Qui}
Ruxton, G. D. & Colegrave, N. (2003). Experimental Design for the Life Sciences.
Oxford University Press, Oxford. {Excellent book on how to design experiments - well
worth purchasing at £15-00}
Greenfield, T. (2002). Research Methods for Postgraduates. 2nd ed. Arnold, London.
{Read chapters 19-27 on ‘research types’ and ‘measurement’}
Moore, D.S. (2000). Statistics: Concepts and Controversies. Freeman, New York. {Read
chapters 1-9 on data collection and 23 on significance testing}
Sokal, R. R. & Rohlf, F. J. (1995). Biometry. Freeman, San Francisco. {Expensive (ca.
£40), but good reference book on general statistics. Gives worked examples for many
of the calculations. Very little specific information on experimental design. Probably
worth buying for the person who will be doing a lot of statistical analysis in the future.}
QH323.5 Sok
Wardlaw, A. C. (1985). Practical Statistics for Experimental Biologists. Wiley,
Chichester. {Good, easy-to-understand, jargon-free text on basic statistics, but does not
go far enough into anova or experimental design.} QH323.5 War
Fowler, J. & Cohen, L. (1992). Practical Statistics for Field Biology. Wiley, Chichester.
{Simple and very clear text ideal for field ecologists with no background in statistics,
but does not go far enough into anova or multivariate analyses.} QH318.5 Fow
Underwood, A. J. (1997). Experiments in Ecology: their Logical Design and
Interpretation. {Gives a very clear explanation of some of the pitfalls of experimental
design in ecology.} QH541.24 Und
Meyer, R. K. & Kruger, D. D. (1998) A Minitab Guide to Statistics. Prentice Hall. {A
guide to statistics that uses Minitab to illustrate the examples.} HF1017 Mey
Ryan, B. F. (2000). Minitab Handbook. Pacific Grove, CA. {Reference book for use of
Minitab} QA276.4 Rya
Williams, B. (1993). Biostatistics: Concepts and Applications for Biologists. Chapman &
Hall, London. {Follows my approach quite well - covers concepts in quite simple
terms, but does not provide details on many actual statistical methods.} QH323.5 Wil.
Feinsinger, P. (2001). Designing Field Studies for Biodiversity Conservation. Nature
Conservancy, Washington. QH75 Fei. {Excellent book on survey and experimental
design for conservation}
GLOSSARY OF TERMS
Accuracy: implies 'correctness' - agreement with an agreed standard - centred on the target
Precision: implies 'consistency' - always getting the same result (regardless of whether that
result is correct) - very little variation between repeated measurements
Bias: is a consistent error in a particular direction which makes the result misrepresent the
population
Population: the total set of individuals which could conceivably be sampled, and about
which you intend to make inferences
Hypothesis: a suggested explanation for a group of facts or phenomena, either accepted as a
basis for further verification (working hypothesis) or accepted as likely to be true
Theory: a set of hypotheses related by mathematical or logical arguments to explain and
predict a wide variety of connected phenomena in general terms
Induction: a process of reasoning by which a general conclusion is drawn from an
accumulated set of observations of specific instances, based mainly on experience or
experimental evidence. The conclusion contains more information than the
observations, but can be disproved by further observations.
Deduction: a process of reasoning by which a specific conclusion necessarily follows from
a set of general premisses
Degrees of Freedom: the number of independent observations which can be used to
estimate a particular parameter. For example, when estimating variance or standard
deviation, you need to know the mean x̄ to estimate the deviations (x - x̄); if you have
only one observation then x will equal x̄, so there can be no deviation from the mean -
there are no degrees of freedom for variation; for two observations, (x1 - x̄) and
(x2 - x̄) will be equal and opposite, so there is only one independent observation of
deviation from the mean - there is only one degree of freedom. The degrees of freedom
in this case will therefore be (n - 1), because one degree of freedom has been used to
estimate the mean, which is required for the calculation of variance.
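A short Python sketch (with invented numbers) makes the constraint explicit: the deviations from the sample mean always sum to zero, so only n - 1 of them are free to vary, and dividing the sum of squares by n - 1 reproduces the standard sample variance.

```python
from statistics import mean, variance

# Hypothetical sample (invented numbers)
sample = [3.2, 4.1, 5.0, 4.6, 3.9]
m = mean(sample)
deviations = [x - m for x in sample]

# The deviations sum to zero: knowing any n - 1 of them fixes the last one
n = len(sample)
ss = sum(d ** 2 for d in deviations)
var_n_minus_1 = ss / (n - 1)        # divide by the degrees of freedom

print(f"sum of deviations = {sum(deviations):.1e}, variance = {var_n_minus_1:.3f}")
```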
Central Limit Theorem: whatever the underlying distribution, the means of random
samples taken from that population will approach a normal distribution; a tendency
which increases with sample size. Usually, samples of size 10 will give means
sufficiently near to a normal distribution for normal statistics to apply. This justifies
the wide emphasis on the normal distribution in statistics.
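The theorem is easy to see by simulation. This Python sketch draws repeated samples of size 10 from a strongly skewed exponential population (an illustrative choice, not from the course) and looks at the resulting sample means: they cluster around the population mean with spread close to σ/√n.

```python
import random
from statistics import mean, stdev

random.seed(0)

# Draw 5000 samples of size 10 from an exponential population with mean 1
sample_means = [mean(random.expovariate(1.0) for _ in range(10))
                for _ in range(5000)]

# The means centre on the population mean (1.0) with spread near 1/sqrt(10)
print(round(mean(sample_means), 3), round(stdev(sample_means), 3))
```

Plotting a histogram of sample_means would show a roughly symmetric bell shape even though the parent population is heavily right-skewed.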
Sample Exam Paper
Research Methodology
Answer any two questions
Open book
1.
Explain each of the following terms and briefly discuss their importance in survey and
experimental design:
The normal probability distribution
Precision, generality and reality
Power analysis
Randomised block experimental design
2.
Discuss and contrast the advantages and disadvantages of the following sampling methods:
Accessibility sampling
Quota sampling
Systematic sampling
Random sampling
3.
You have been asked to conduct an experiment to assess the establishment rate of tree
seedlings from different planting methods. Seedlings may be either bare-root seedlings or
seedlings grown with compost in root trainers. Seedlings may be planted either into
undisturbed ground or onto the ridges of ground that has been ploughed.
Describe, with justification, how you would design the experiment, how you would lay out
the plots and how you would analyse the results.
4.
The Tar Spot fungus (Rhytisma acerinum) is a very common fungal disease on the
Sycamore tree (Acer pseudoplatanus) that causes conspicuous black spots on the leaves in
late summer. It is thought that the fungus is susceptible to atmospheric pollution and
hence is more common in rural areas than in urban areas. Describe how you would test
this hypothesis.