Research Skills: One-way independent-measures ANOVA, Graham Hole, March 2009

One-way Independent-measures Analysis of Variance (ANOVA):
What is "Analysis of Variance"?
Analysis of Variance, or ANOVA for short, is a whole family of statistical tests that are
widely used by psychologists. This handout will
(a) explain the advantages of using ANOVA;
(b) describe the rationale behind how ANOVA works;
(c) explain, step-by-step, how to do a simple ANOVA.
Why use ANOVA?
ANOVA is most often used when you have an experiment in which there are a number of
groups or conditions, and you want to see if there are any statistically significant differences
between them. Suppose we were interested in the effects of caffeine on memory. We could look
at this experimentally as follows. We could have four different groups of participants, and give
each group a different dosage of caffeine. Group A might get no caffeine (and hence act as a
control group against which to compare the others); group B might get one milligram of caffeine;
group C five milligrams; and group D ten milligrams. We could then give each participant a
memory test, and thus get a score for each participant. Here's the data you might obtain:
Group A (0 mg):    4    3    5    6    2     mean = 4
Group B (1 mg):    7    9   10   11    8     mean = 9
Group C (5 mg):   11   15   13   11   10     mean = 12
Group D (10 mg):  14   12   10   15   14     mean = 13
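
(If you like to follow the calculations along in code, here is a minimal Python sketch - standard library only, and the variable names are mine rather than part of the handout - that reproduces the four group means in the table above.)

# Memory-test scores for the four caffeine groups in the table above.
scores = {
    "Group A (0 mg)":  [4, 3, 5, 6, 2],
    "Group B (1 mg)":  [7, 9, 10, 11, 8],
    "Group C (5 mg)":  [11, 15, 13, 11, 10],
    "Group D (10 mg)": [14, 12, 10, 15, 14],
}

for name, values in scores.items():
    mean = sum(values) / len(values)
    print(name, "mean =", mean)   # 4.0, 9.0, 12.0, 13.0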
How would we analyse these data? Looking at the means, it appears that caffeine has
affected memory test scores. It looks as if the more caffeine that's consumed, the higher the
memory score (although this trend tails off with higher doses of caffeine). What statistical test
could we use to see if our groups truly differed in terms of their performance on our memory test?
We could perform lots of independent-measures t-tests, in order to compare each group with
every other. So, we could do a t-test to compare group A with group B; another t-test to compare
group A with group C; yet another to compare group A with group D; and so on. The problem
with this is that you would end up doing a lot of t-tests on the same data. With four groups, you
would have to do six t-tests to compare each group with every other one:
A with B, A with C, A with D;
B with C, B with D;
C with D.
With five groups you would have to do ten tests, and with six groups, fifteen tests! The
problem with doing lots of tests on the same data like this is that you run an increased risk of
getting a "significant" result purely by chance: a so-called "Type 1" error.
Revision of Type 1 and Type 2 Errors:
Remember that every time you do a statistical test, you run the risk of making one of two
kinds of error:
(a) "Type 1" error: deciding there is a real difference between your experimental
conditions when in fact the difference has arisen merely by chance. In statistical jargon, this is
known as rejecting the null hypothesis (that there's no difference between your groups) when in
fact it is true. (You might also see this referred to as an "alpha" error).
(b) "Type 2" error: deciding that the difference between conditions is merely due to
chance, when in fact it's a real difference. In the jargon, this is known as accepting the null
hypothesis (that there's no difference between your groups) when in fact it is false. (You might
see this referred to as a "beta" error).
The chances of making one or other of these errors are always with us, every time we
run an experiment. If you try to reduce the risks of making one type of error, you increase the risk
of making the other. For example, if you decide to be very cautious, and only accept a difference
between groups as "real" when it is a very large difference, you will reduce your risk of making a
type 1 error (accepting a difference as real when it's really just due to random fluctuations in
performance). However, because you are being so cautious, you will increase your chances of
making a type 2 error (dismissing a difference between groups as being due to random variation
in performance, when in fact it is a genuine difference). Similarly, if you decide to be incautious,
and decide that you will regard even very small differences between groups as being "real" ones,
then you will reduce your chances of making a type 2 error (i.e., you won't often discount a real
difference as being due to chance), but you will probably make lots of type 1 errors (lots of the
differences you accept as "real" will have arisen merely by chance).
The conventional significance level of 0.05 represents a generally-accepted trade-off
between the chances of making these two kinds of errors. If we do a statistical test, and the
results are significant at the 0.05 level, what we are really saying is this: we are prepared to
regard the difference between groups that has given rise to this result as being a real difference,
even though, roughly five times in a hundred, such a result could arise merely by chance. The
0.05 refers to our chances of making a type 1 error.
ANOVA and the Type 1 error:
Hopefully, you should be able to see why doing lots of tests on the same data is a bad
idea. Every time you do a test, you run the risk of making a type 1 error. The more tests you do
on the same data, the more chance you have of obtaining a spuriously "significant" result. If you
do a hundred tests, five of them are likely to give you "significant" results that are actually due to
chance fluctuations in performance between the groups in your experiment. It's a bit like playing
Russian Roulette: pull the trigger once, and you are quite likely to get away with it, but the more
times you pull the trigger, the more likely you are to end up blowing your head off! (The results of
making a type 1 error in a psychology experiment are a little less messy, admittedly).
One of the main advantages of ANOVA is that it enables us to compare lots of groups all
at once, without inflating our chances of making a type 1 error. Doing an ANOVA is rather like
doing lots of t-tests all at once, but without the statistical disadvantages of doing so. (In fact,
ANOVA and the t-test are closely related tests in many ways).
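
(One concrete sign of how closely related they are: with only two groups, a one-way ANOVA's F-ratio is the square of the independent-measures t statistic, and the two tests give the same p-value. Here is a quick check in Python, assuming the scipy package is installed; the two groups are simply the 0 mg and 1 mg groups from the table.)

from scipy import stats

group_a = [4, 3, 5, 6, 2]     # 0 mg group
group_b = [7, 9, 10, 11, 8]   # 1 mg group

t, p_from_t = stats.ttest_ind(group_a, group_b)
f, p_from_f = stats.f_oneway(group_a, group_b)

print(round(t ** 2, 4), round(f, 4))           # identical values: F equals t squared
print(round(p_from_t, 4), round(p_from_f, 4))  # identical p-values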
Other advantages of ANOVA:
(a) ANOVA enables us to test for trends in the data:
Looking at the mean scores in our caffeine data, it looks as if there is a trend: the more
caffeine consumed, the better the memory performance. We can use ANOVA to see if trends in
the data like this are "real" (i.e., unlikely to have arisen by chance). We don't have to confine
ourselves to seeing if there is a simple "linear" trend in the data, either: we can test for more
complicated trends (such as performance increasing and then decreasing, or performance
increasing and then flattening off, etc.) This is second-year stuff, however...
(b) ANOVA enables us to look at the effects of more than one independent variable
at a time:
So far in your statistics education, you have only looked at the effects of one independent
variable at a time. Moreover, you have largely been limited to making comparisons between just
two levels of one IV. For example, you might use a t-test or a Mann-Whitney test to compare the
memory performance of males and females (two levels of a single independent variable, "sex").
The only tests you have covered that enable you to compare more than two groups at a time are
the Friedman and Kruskal-Wallis tests, but even these only enable you to deal with one IV at a
time. The real power of ANOVA is that it enables you to look at the effects of more than one IV in
a single experiment. So, for example, instead of just looking at "the effects on memory of caffeine
dosage" (one IV, one DV), you could look at "sex differences in the effects on memory of caffeine
dosage" (two IV's, one DV) or even "age and sex differences in the effects on memory of caffeine
dosage" (three IV's and one DV)!
ANOVA enables you to see how these independent variables interact, enabling you to do
more sophisticated experiments. Instead of merely asking whether caffeine dosage affects
memory, we can ask questions such as: do the effects of caffeine on memory differ for men and
women? It might be, for example, that as the caffeine dosage increases, memory performance is
increased, but that this trend is more pronounced for women than it is for men. ANOVA enables
us to test for these kinds of complicated interactions between variables. Again, this is beyond the
scope of RM1 - something to look forward to in the second year!
An overview of how ANOVA works:
Analysis of variance does exactly what its name implies: it breaks down the variation
present within your set of scores, and works out how much of it is due to your experimental
manipulations (what you did to the participants) and how much of it is due to random variation
that has nothing to do with your experimental manipulations (variation that's due to individual
differences between participants). ANOVA then compares these two sources of variation. There
are two possible outcomes:
(a) your experimental manipulation has produced a lot of variation in the scores,
compared to the amount of random variation that would have existed in the set of scores anyway.
Your experimental manipulation has produced an effect on participants' performance.
(b) your experimental manipulation has not produced much variation in scores, compared
to the random variation that is present anyway. Your experimental manipulation has not produced
any discernible effects on participants' performance.
A concrete example of the logic behind ANOVA:
Imagine a fictional world, in which all participants were identical in every respect before
they did an experiment. You could give them a memory test, and all participants would produce
exactly the same score.
Suppose you took four groups of participants from this mob and did something different
to each group: for example, you could give them different doses of caffeine. In this ideal world, a
group of participants would all respond in exactly the same way to whatever was done to them.
Therefore, everyone within a group receiving no caffeine produces a memory score of 4;
everyone within a group receiving 1 mg of caffeine produces a memory score of 9; everyone
within a group receiving 5 mg of caffeine gives a score of 12; and everyone in the group receiving
10 mg gives a score of 13.
In this ideal world, we now have variation in our scores: some participants give a score of
4; others give a score of 9; others give a score of 12; and some give a score of 13. All of the
variation in our obtained scores comes from the effects of our experimental manipulation (caffeine
dosage). None of the variation comes from random differences between our participants. The
effects of our experimental manipulation (caffeine dosage) are clear-cut. We would be able to
conclude that caffeine dosage affects memory test performance.
In the real world, participants don't behave like this (which is why we need statistics in the
first place!). For various reasons, the scores that we obtain from doing our experiment will vary
from each other. Firstly, participants will perform differently from each other, even before we do
anything to them in our experiment: for example, some participants might naturally have a good
memory, others might have a bad memory. Secondly, participants will also respond differently to
the effects of our experimental manipulations: for example, some people might be greatly affected
by caffeine, while others might be only mildly affected by it.
Because of these factors, if we do the caffeine experiment in real life, the set of scores
that we end up with will probably show some variation even if our experimental manipulation has
had no effect. Our experiment has to produce an effect on performance (i.e., variation in scores)
over and above this pre-existent random variation. In terms of the caffeine experiment, some of
the variation in scores comes from the effects of caffeine, and some of the variation comes from
random differences between participants that are unrelated to our experiment.
Systematic versus random variation:
How can we distinguish the variation in the set of scores that's due to our experimental
manipulation, from the variation in the scores that's produced by random differences between
participants? In principle, the answer is simple: individual differences in performance are by their
nature fairly random, and are therefore not likely to vary consistently between different groups in
the experiment. However, the effects of our experimental manipulation should be consistently
different between one group and another, because that is how we have administered them:
everyone within a single group gets the same treatment from us.
Consider the scores in the table. They all vary, both within a particular group and also
between groups. Variation in scores within a group can't be due to what we did to the
participants, as we did exactly the same thing to all participants within a group. If there is variation
within a group, it must be due to random factors outside our control.
Variation between groups can, in principle, occur for two reasons: because of what we
did to the participants (our experimental manipulations) and/or because of random variation
between participants (by chance, we might happen to have more people with good memories in
group D than we do in group A). However as long as we take care to ensure that the only
systematic difference between groups is due to our experimental manipulation, because the
variation between participants is due to random factors, it is unlikely to produce systematic effects
on performance: it is unlikely to make one group perform consistently better or worse than
another. Consistent (systematic) variation between the groups is more likely to be due to what we
did to the participants: i.e., due to our experimental manipulation.
In short, variation within the groups of an experiment is due to random factors. Variation
between the groups of an experiment can occur because of both random factors and the
influence of our experimental manipulations; however, it is only likely to be large if it is due to the
latter, as this is the only thing which varies systematically between the groups. Therefore, all we
have to do is work out how much variation there is in a set of scores; find out how much of it
comes from differences within groups; and find out how much of it comes from differences
between groups. If the between-groups variation is large compared to the within-groups variation,
we can be reasonably sure that our experimental manipulation has affected participants'
performance.
Total variation amongst a set of scores = between-groups variation + within-groups variation
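
(Here is that partition carried out on the caffeine data, as a Python sketch using only the standard library; the variable names are mine. The three quantities it prints reappear later as the "sums of squares" in the ANOVA summary table.)

groups = {
    "A": [4, 3, 5, 6, 2],
    "B": [7, 9, 10, 11, 8],
    "C": [11, 15, 13, 11, 10],
    "D": [14, 12, 10, 15, 14],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

def group_mean(g):
    return sum(g) / len(g)

# Total variation: every score's squared deviation from the grand mean.
total = sum((x - grand_mean) ** 2 for x in all_scores)

# Within-groups variation: each score's squared deviation from its own group's mean.
within = sum((x - group_mean(g)) ** 2 for g in groups.values() for x in g)

# Between-groups variation: each group mean's squared deviation from the grand mean,
# counted once per participant in that group.
between = sum(len(g) * (group_mean(g) - grand_mean) ** 2 for g in groups.values())

print(total, between + within)   # 297.0 and 297.0: total = between + within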
To compare the size of the between-groups variation to the within-groups variation, we
simply divide one by the other: the larger the between-groups variation compared to the within-groups variation, the larger the number that will result. We will then look up this number (called an
F-ratio) in a table to see how likely it is to have occurred by chance - in much the same way as
you look up the results of a t-test, for example.
How we do all this in practice:
How we do this in practice is to assess variation within and between groups by using a
statistical measure based on the variance of the scores. You have encountered the variance
before, as an intermediate step in working out the standard deviation of a set of scores.
(Remember that the standard deviation is a measure of the average amount of variation amongst
a set of scores - it tells you how much scores are spread out around their mean. The variance is
the standard deviation squared).
The main reason we use the variance rather than the standard deviation is that it makes
the arithmetic easier. (Variances can be added together, whereas standard deviations can't,
because of the square-rooting that is added into the s.d. formula in order to return the s.d. to the
same units as the original scores and their mean).
Here is the formula for the variance:
variance = ∑(X − X̄)² / N
In English, this means do the following:
Take a set of scores (e.g. one of the groups from the table), and find their mean.
Find the difference between each of the scores and the mean.
Square each of these differences (because otherwise they will add up to zero).
Add up the squared differences.
Normally, you would then divide this sum by the number of scores, N, in order to get an
average deviation of the scores from the group mean - i.e., the variance. However, in ANOVA, we
will want to take into account the number of participants and number of groups we have.
Therefore, in practice we will only use the top line of the variance formula (called the "Sum of
Squares", or "SS" for short):
Sum of Squares = ∑(X − X̄)²
We will divide this not by the number of scores, but by the appropriate "degrees of freedom"
(which is usually the number of groups or participants minus 1). More details on this below.
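
(In code, the variance and the Sum of Squares for a single group look like this - a minimal Python sketch, with function names of my own choosing. Note that it follows the handout's formula and divides by N; the standard deviations in the SPSS output later are based on N - 1 instead.)

def sum_of_squares(scores):
    # Top line of the variance formula: the sum of squared deviations from the mean.
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores)

def variance(scores):
    # The handout's variance: Sum of Squares divided by N.
    return sum_of_squares(scores) / len(scores)

group_a = [4, 3, 5, 6, 2]       # the 0 mg group
print(sum_of_squares(group_a))  # 10.0
print(variance(group_a))        # 2.0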
Earlier, I said that the total variation amongst a set of scores consisted of between-groups variation plus within-groups variation. Another way of expressing this is to say that the
total sums of squares can be broken down into the between-groups sums of squares, and the
within-groups sums of squares. What we have to do is to work these out, and then see how large
the between-groups sums of squares is in relation to the within-groups sums of squares, once
we've taken the number of participants and number of groups into account by using the
appropriate degrees of freedom.
Step-by-step example of a One-way Independent-Measures ANOVA:
As mentioned earlier, there are lots of different types of ANOVA. The following example
will show you how to perform a one-way independent-measures ANOVA. You use this where you
have the following:
(a) one independent variable (which is why it's called "one-way");
(b) one dependent variable (you get only one score from each participant);
(c) each participant participates in only one condition in the experiment (i.e., they are
used as a participant only once).
A one-way independent-measures ANOVA is equivalent to an independent-measures t-test, except that you have more than two groups of participants. (You can have as many groups
of participants as you like in theory: the term "one-way" refers to the fact that you have only one
independent variable, and not to the number of levels of that IV). Another way of looking at it is to
say that it is a parametric equivalent of the Kruskal-Wallis test.
Although some statistics books manage to make hand-calculation of ANOVA look scary,
it's actually quite simple. However, since it is so quick and easy to use SPSS to do the work, I'm
just going to give you an overview of what SPSS works out and why.
Total SS:
This shows us how much variation there is between all of the scores, regardless of which
group they belong to.
Total degrees of freedom:
This is the total number of scores minus 1. In our example, total d.f. = 20 - 1 = 19.
Between-Groups SS:
This is a measure of how much variation exists between the groups in our experiment.
Between-Groups degrees of freedom:
This is the number of groups minus 1. We have four groups, so our between-groups
d.f. = 3.
Within-Groups SS:
This tells us how much variation exists within each of our experimental groups.
Within-Groups degrees of freedom:
This is obtained by taking the number of scores in group A minus 1, and adding this
number to the number of scores in group B minus 1, and so on. Here, we have five scores in
each group, and so the within-groups d.f. = 4 + 4 + 4 + 4 = 16.
Arithmetic check:
Note that the between-groups SS and the within-groups SS add up to the total SS.
Essentially, we break down the total SS into its two components (between-groups variation and
within-groups variation), so these two combined cannot come to more than the total SS!
Similarly, the within-groups d.f. added to the between-groups d.f. must equal the total d.f..
Here, 3 + 16 = 19, so again we are okay.
If the numbers don't add up correctly, something is very wrong!
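
(The degrees-of-freedom bookkeeping, and the arithmetic check, in a few lines of Python:)

group_sizes = [5, 5, 5, 5]                    # five scores in each of the four groups

df_total = sum(group_sizes) - 1               # 20 - 1 = 19
df_between = len(group_sizes) - 1             # 4 - 1 = 3
df_within = sum(n - 1 for n in group_sizes)   # 4 + 4 + 4 + 4 = 16

assert df_between + df_within == df_total     # 3 + 16 = 19, so we are okay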
Mean Squares:
As mentioned earlier, we are going to compare the amount of variation between groups
to that existing within groups, but in order to do this, we need to take into account the number of
scores on which each sums of squares is based. This is where the degrees of freedom that we
have been calculating come into play. We (well, SPSS anyway!) need to work out things called
"Mean Squares" ("MS" for short), which are like "average" amounts of variation. Dividing the
between-groups SS by the between-groups d.f. produces the "Between-Groups Mean Squares".
Dividing the within-groups SS by the within-groups d.f. gives us the "Within-Groups Mean
Squares".
Between-groups MS = Between-groups SS / Between-groups d.f.
Within-groups MS = Within-groups SS / Within-groups d.f.
F-ratio:
Now we need to see how large the between-groups variation is in relation to the within-groups variation. We do this by dividing the between-groups MS by the within-groups MS. The
result is called an F-ratio.
F = between-groups MS / within-groups MS
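
(Putting those last two steps into code, using the sums of squares from the earlier partition sketch - 245 between groups and 52 within groups - and the degrees of freedom just worked out:)

ss_between, ss_within = 245.0, 52.0
df_between, df_within = 3, 16

ms_between = ss_between / df_between   # 81.67
ms_within = ss_within / df_within      # 3.25

f_ratio = ms_between / ms_within
print(round(f_ratio, 2))               # 25.13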
The ANOVA summary table:
The results of an Analysis of Variance are often displayed in the form of a summary table.
Here's the table for our current example:
Source             SS        d.f.    MS       F
Between groups     245.00     3      81.67    25.13
Within groups       52.00    16       3.25
Total              297.00    19
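
(If you want to check a summary table like this without doing any of the hand calculation, scipy can produce the F-ratio and its exact probability directly - a sketch, assuming scipy is installed:)

from scipy.stats import f_oneway

result = f_oneway(
    [4, 3, 5, 6, 2],       # 0 mg
    [7, 9, 10, 11, 8],     # 1 mg
    [11, 15, 13, 11, 10],  # 5 mg
    [14, 12, 10, 15, 14],  # 10 mg
)
print(round(result.statistic, 2))   # F = 25.13
print(result.pvalue)                # a very small probability, well below .0005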
Different statistics packages may display the results in a different way, but most of these
principal details will be there somewhere. The really important bit is the following:
Assessing the significance of the F-ratio:
The bigger the value of the F-ratio, the less likely it is to have arisen merely by chance.
How do you decide whether it's "big"? You consult a table of "critical values of F". (There's one on
my website). If your value of F is equal to or larger than the value in the table, it is unlikely to have
arisen by chance. To find the correct table value against which to compare your obtained F-ratio,
you use the between-groups and within-groups d.f.. In the present example, we need to look up
the critical F-value for 3 and 16 d.f. Here is an extract from a table of critical F-values, for a
significance level of 0.05:
Within-groups            Between-groups d.f.
d.f.                  1        2        3        4
 1                 161.4    199.5    215.7    224.6
 2                  18.51    19.00    19.16    19.25
 3                  10.13     9.55     9.28     9.12
 4                   7.71     6.94     6.59     6.39
 5                   6.61     5.79     5.41     5.19
 6                   5.99     5.14     4.76     4.53
 7                   5.59     4.74     4.35     4.12
 8                   5.32     4.46     4.07     3.84
 9                   5.12     4.26     3.86     3.63
10                   4.96     4.10     3.71     3.48
11                   4.84     3.98     3.59     3.36
12                   4.75     3.89     3.49     3.26
13                   4.67     3.81     3.41     3.18
14                   4.60     3.74     3.34     3.11
15                   4.54     3.68     3.29     3.06
16                   4.49     3.63     3.24     3.01
17                   4.45     3.59     3.20     2.96
Treat the between-groups d.f. and the within-groups d.f. as coordinates: we have 3
between-groups d.f. and 16 within-groups d.f., so we go along 3 columns in the table, and down
16 rows. At the intersection of these coordinates is the critical value of F that we seek: with our
particular combination of d.f., values of F as large as this one or larger are likely to occur by
chance with a probability of 0.05 - i.e., about 5 times in 100. Therefore, if our obtained value
of F is equal to or larger than this critical value of F in the table, our obtained value must have a
similarly low probability of having occurred by chance.
In short, we compare our obtained F to the appropriate critical value of F obtained from
the table: if our obtained value is equal to or larger than the critical value, there is a significant
difference between the conditions in our experiment. On the other hand, if our obtained value is
smaller than the critical value, then there is no significant difference between the conditions in our
experiment.
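
(The critical value - and the exact probability of your obtained F - can also be taken straight from the F distribution rather than from a printed table. A Python sketch, assuming scipy is installed:)

from scipy.stats import f

df_between, df_within = 3, 16

# Critical value of F at the 0.05 significance level for (3, 16) d.f.
critical_f = f.ppf(0.95, df_between, df_within)
print(round(critical_f, 2))          # 3.24, as in the table extract above

# Exact probability of an F-ratio at least as large as ours arising by chance.
print(f.sf(25.13, df_between, df_within))   # far smaller than 0.05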
(Different tables may display these critical values of F in different ways: the table above
shows only critical values for a 0.05 significance level, but often tables will show critical values for
a 0.01 significance level as well (or even more stringent significance levels, such as 0.001).
If you are using SPSS or Excel to do the dirty work, then you won't need to use this table:
the statistical package gives you the exact probability of obtaining your particular value of F by
chance. Thus, for example, SPSS might say that the probability was "0.036" or somesuch. Some
people report these exact probabilities; others merely round them to the nearest conventional
figure, so that instead of reporting 0.036, they would simply say: p<0.05).
Interpreting the Results:
If we obtain a significant F-ratio, all it tells us is that there is a statistically-significant
difference between our experimental conditions. In our example, it tells us that caffeine dosage
does make a difference to memory performance. However, this is all that the ANOVA does: it
doesn't say where the difference comes from. For example, in our caffeine example, it might be
that group A (0 mg caffeine) was different from all the other groups; or it might be that all the
groups are different from each other; or groups A and B might be similar, but different from
groups C and D; and so on. Usually, looking at the means for the different conditions can help
you to work out what is going on. For example, in our example, it seems fairly clear that increased
levels of caffeine lead to increased memory performance (although we wouldn't be too confident
in saying that groups C and D differed from each other).
In many cases, a significant ANOVA would be followed by further statistical analysis,
using "planned comparisons or "post hoc tests", in order to determine which differences between
groups have given rise to the overall result.
SPSS output and what it means:
Here is the output from SPSS for the current example. (Click on "analyze data", then
"compare means" and finally "one-way ANOVA").
Under "options", I selected "descriptive statistics" and "homogeneity of variance". The former
gives me a mean and standard deviation for each condition, essential for interpreting the results
of the ANOVA. The latter performs a "Levene's test" on the data, to test whether or not the
conditions show homogeneity of variance (i.e. whether the spread of scores is roughly similar in
all conditions, one of the requirements for performing a parametric test like ANOVA). If Levene's
test is NOT significant, then you are OK: you can assume that the data show homogeneity of
variance. If Levene's test is statistically significant (i.e. its significance is 0.05 or less), then the
spread of scores is NOT similar across the different conditions: the data thus violate one of the
requirements for performing ANOVA, and you should perhaps consider using a non-parametric
test instead, such as the Kruskal-Wallis test. Here, Levene's test is not significant, so we are OK
to use ANOVA.
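
(The same check can be run outside SPSS. Here is a sketch using scipy; note that scipy's levene() centres on the median by default, so center="mean" is passed so that the result mirrors the mean-centred figures in the SPSS output shown below.)

from scipy.stats import levene

stat, p = levene(
    [4, 3, 5, 6, 2],       # 0 mg
    [7, 9, 10, 11, 8],     # 1 mg
    [11, 15, 13, 11, 10],  # 5 mg
    [14, 12, 10, 15, 14],  # 10 mg
    center="mean",         # scipy defaults to the median; "mean" mirrors SPSS's version
)
print(round(stat, 3), round(p, 3))   # 0.356 and 0.786: not significant, so homogeneity is fine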
Descriptives: SCORE

          N    Mean      Std. Deviation   Std. Error   95% Confidence Interval for Mean   Minimum   Maximum
                                                       Lower Bound     Upper Bound
0 mg      5     4.0000   1.58114          .70711        2.0368          5.9632             2.00      6.00
1 mg      5     9.0000   1.58114          .70711        7.0368         10.9632             7.00     11.00
5 mg      5    12.0000   2.00000          .89443        9.5167         14.4833            10.00     15.00
10 mg     5    13.0000   2.00000          .89443       10.5167         15.4833            10.00     15.00
Total    20     9.5000   3.95368          .88407        7.6496         11.3504             2.00     15.00
Test of Homogeneity of Variances: SCORE

Levene Statistic   df1   df2   Sig.
.356               3     16    .786
Here's the ANOVA summary table. Our F-ratio is shown as having a significance level of ".000".
SPSS only displays this value to three decimal places, so ".000" means that our F-ratio is significant
at p<.0005. In other words, the difference between our groups is highly unlikely to have occurred
by chance. It is more plausible to assume that caffeine dosage has significantly affected people's
memories.
ANOVA: SCORE

                  Sum of Squares   df   Mean Square   F        Sig.
Between Groups    245.000           3   81.667        25.128   .000
Within Groups      52.000          16    3.250
Total             297.000          19
To pinpoint exactly where the difference between our groups lies, we can perform "post hoc
tests". There is a large number of these, varying in terms of the types of comparison we want to
make, how conservative they are, etc. A popular post hoc test is the Bonferroni test, so I selected
that, using the "post hoc..." option on the SPSS ANOVA dialog box. The output compares each
group with every other group, so there is a fair degree of redundancy in the table! The important
columns are the first and fourth. We have a significant difference between the 0 mg and 1 mg
groups (p = .003), the 0 mg and 5 mg groups (shown as p = .000), the 0 mg and 10 mg groups
(shown as p = .000), and between the 1 mg and 10 mg groups (p = .017). All other comparisons
between groups are non-significant (i.e. p > .05). Thus it looks as if any caffeine improves memory
performance compared to taking none at all, and 10 mg produces better effects than 1 mg.
Multiple Comparisons
Dependent Variable: SCORE
Bonferroni

(I) level of   (J) level of   Mean Difference                          95% Confidence Interval
caffeine       caffeine       (I-J)            Std. Error   Sig.       Lower Bound   Upper Bound
0 mg           1 mg           -5.0000*         1.14018      .003        -8.4300       -1.5700
0 mg           5 mg           -8.0000*         1.14018      .000       -11.4300       -4.5700
0 mg           10 mg          -9.0000*         1.14018      .000       -12.4300       -5.5700
1 mg           0 mg            5.0000*         1.14018      .003         1.5700        8.4300
1 mg           5 mg           -3.0000          1.14018      .109        -6.4300         .4300
1 mg           10 mg          -4.0000*         1.14018      .017        -7.4300        -.5700
5 mg           0 mg            8.0000*         1.14018      .000         4.5700       11.4300
5 mg           1 mg            3.0000          1.14018      .109         -.4300        6.4300
5 mg           10 mg          -1.0000          1.14018      1.000       -4.4300        2.4300
10 mg          0 mg            9.0000*         1.14018      .000         5.5700       12.4300
10 mg          1 mg            4.0000*         1.14018      .017          .5700        7.4300
10 mg          5 mg            1.0000          1.14018      1.000       -2.4300        4.4300

*. The mean difference is significant at the .05 level.
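
(For completeness, here is a Python sketch of the kind of calculation that lies behind a table like this: each pair of groups is compared using the pooled within-groups Mean Square as the error term, and each two-tailed p-value is multiplied by the number of comparisons and capped at 1 - the essence of a Bonferroni correction. It assumes scipy is installed, and it is an illustration of the general approach rather than a line-by-line reproduction of SPSS's procedure, although for these data it gives the same pattern of significant and non-significant differences.)

from itertools import combinations
from scipy.stats import t

groups = {
    "0 mg":  [4, 3, 5, 6, 2],
    "1 mg":  [7, 9, 10, 11, 8],
    "5 mg":  [11, 15, 13, 11, 10],
    "10 mg": [14, 12, 10, 15, 14],
}

ms_within, df_within = 3.25, 16             # error term from the ANOVA summary table
pairs = list(combinations(groups, 2))       # the 6 distinct pairs of groups

for name_i, name_j in pairs:
    g_i, g_j = groups[name_i], groups[name_j]
    diff = sum(g_i) / len(g_i) - sum(g_j) / len(g_j)
    se = (ms_within * (1 / len(g_i) + 1 / len(g_j))) ** 0.5    # 1.14018 for these group sizes
    p_raw = 2 * t.sf(abs(diff) / se, df_within)                # ordinary two-tailed p-value
    p_bonferroni = min(1.0, p_raw * len(pairs))                # Bonferroni correction
    print(f"{name_i} vs {name_j}: difference = {diff:+.1f}, p = {p_bonferroni:.3f}")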