PBAF 528
Week 2
A. More Simple Linear Regression (Least Squares Regression)
From a sample we estimate the following equation:
$Y_i = \hat{\beta}_0 + \hat{\beta}_1 X_i + e_i$, where $\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$ is the fitted value and $e_i$ is the residual.
Ordinary Least Squares (OLS) gives estimates of model parameters.
• Minimizes the sum of the squared distances between the data points and the fitted line.
• Coefficients weight “outliers” more, since residuals are squared.
• Precision of these estimates (SE) depends on sample size (larger is better), amount of noise (less is good), and amount of variation in the explanatory factor (more is good).
• The fitted line must pass through $(\bar{X}, \bar{Y})$.
• $\hat{\beta}_1$ is the slope of the fitted line.
• $\hat{\beta}_0$ is the intercept, the expected value of Y when X = 0.
From A. H. Studenmund (1997), Using Econometrics: A Practical Guide.
Why does the line pass through $(\bar{X}, \bar{Y})$?
How many lines pass through the bivariate mean?

What is the value of $\sum_{i=1}^n (y_i - \hat{y}_i)$? Why?

The least squares model finds the line that makes $\sum_{i=1}^n (y_i - \hat{y}_i)^2$ a minimum.
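To make these properties concrete, here is a minimal sketch (not part of the original notes) that fits a least squares line with numpy on made-up data and checks that the line passes through the bivariate mean and that the residuals sum to zero. The data and variable names are purely illustrative:

```python
import numpy as np

# Made-up illustrative data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# OLS estimates from the usual closed-form expressions
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x    # fitted values
resid = y - y_hat      # residuals

# The fitted line passes through (x-bar, y-bar) ...
print(np.isclose(b0 + b1 * x.mean(), y.mean()))   # True
# ... and the residuals sum to zero
print(np.isclose(resid.sum(), 0.0))               # True
```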


Variance in Y


$\sum_{i=1}^n (y_i - \hat{y}_i)^2$ is the residual (unexplained) sum of squares, SSE.
(SPSS refers to this as the Residual Sum of Squares.)

$\sum_{i=1}^n (\hat{y}_i - \bar{y})^2$ is the explained sum of squares.
(SPSS refers to this as the Regression Sum of Squares.)
Total Sum of Squares = Explained SS + Residual SS

$SS_{yy} = \sum_{i=1}^n (y_i - \bar{y})^2$

Explained SS (the Regression SS) $= \sum_{i=1}^n (\hat{y}_i - \bar{y})^2 = SS_{yy} - SSE$

Residual SS $= SSE = \sum_{i=1}^n (y_i - \hat{y}_i)^2$

In your book's notation: $SS_{yy} = SS_{regression} + SSE$
The Ordinary Least Squares (OLS) model minimizes the sum of squared errors (SSE) and therefore maximizes the explained sum of squares.
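As a quick numerical check of this decomposition, continuing the illustrative sketch above (the sum-of-squares variable names are mine, not the book's):

```python
# Verify SSyy = SSregression + SSE on the fitted example
ss_yy = np.sum((y - y.mean()) ** 2)       # total sum of squares
sse = np.sum((y - y_hat) ** 2)            # residual (unexplained) sum of squares
ss_reg = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares

print(np.isclose(ss_yy, ss_reg + sse))    # True
```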
Example 1: Assignment #1 part II 1(a)
B. Assumptions - Straight Line Regression Model
The straight-line regression model assumes four things:
• X and Y are linearly related
• The only randomness in Y comes from the error term, not from uncertainty about X
• The errors, ε, are normally distributed with mean 0 and variance σ²
• The errors associated with various data points are uncorrelated (not related to each other, independent)
C. Estimator of σ²
σ² measures the variability of the random error, ε.
As σ² increases, so does the error in predicting y using ŷ.
s² is an estimate of σ²:

$s^2 = \frac{SSE}{df} = \frac{SSE}{n-2}$
s is the estimated standard error of the regression model
s measures the spread of the distribution of y values about the least
squares line
most observations should be within 2s of the least squares line
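Continuing the same sketch, s² and s can be computed directly (in simple regression df = n - 2), along with a check of the 2s rule of thumb:

```python
n = len(y)
s2 = sse / (n - 2)   # estimate of sigma^2
s = np.sqrt(s2)      # standard error of the regression

# Fraction of observations within 2s of the least squares line
print(np.mean(np.abs(resid) <= 2 * s))
```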
D. Interpreting results
Goodness of Fit – Assessing the Utility of the Model
1. Coefficient of Determination
The smaller SSE is relative to SSyy, the better the regression line appears to
fit (we are explaining more of the variance).
We can measure “fit” by the ratio of the explained sum of squares to the total sum of squares. This ratio is called R²:
$R^2 = \frac{SS_{yy} - SSE}{SS_{yy}} = 1 - \frac{SSE}{SS_{yy}} = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2}$

where

$SSE = \left(\sum y_i^2 - \frac{(\sum y_i)^2}{n}\right) - \hat{\beta}_1\left(\sum x_i y_i - \frac{\sum x_i \sum y_i}{n}\right)$

and

$SS_{yy} = \sum y_i^2 - \frac{(\sum y_i)^2}{n}$
• R² must be between 0 and 1.
• The higher the R², the better the fit: the closer the regression equation is to the sample data. R² gives the proportion of variation in Y explained by the whole model.
• R² close to 1 shows an excellent fit.
• R² close to 0 shows that the regression equation isn't explaining the Y values any better than they'd be explained by assuming no relationship with the X's.
• OLS (that is, minimizing the squared errors) gives us the $\hat{\beta}$'s that minimize SSE (keeping the residuals as small as possible) and thus gives the largest R².
• There is no simple method for deciding how high R² has to be for a satisfactory and useful fit.
• R² cannot be used to compare models with different dependent variables and different n.
• R² is just one part of assessing the quality of the fit. Underlying theory, experience, and usefulness are all important.
Example 2: Assignment #1 Part III 1(a)
Note: The relationship between the correlation coefficient and
coefficient of determination
• For a simple regression, r² (the square of the simple correlation coefficient, r) is equal to R² (the fraction of variability in Y explained by the regression).
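Continuing the sketch, R² can be computed as 1 - SSE/SSyy and checked against the squared correlation (numpy's corrcoef stands in for the SPSS output the notes reference):

```python
r_squared = 1 - sse / ss_yy
print(r_squared)

# In simple regression, R^2 equals the squared correlation of x and y
r = np.corrcoef(x, y)[0, 1]
print(np.isclose(r_squared, r ** 2))   # True
```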
Sampling distribution of parameter estimates

$E(\hat{\beta}_1) = \beta_1$ : the expected value of a coefficient is the true value of that coefficient.

$SD(\hat{\beta}_1) = SE_{\hat{\beta}_1} = s_{\hat{\beta}_1}$ : the standard deviation of the estimate is the standard error.
Precision of estimates (SE) depends on:
• Randomness in outcomes (s²) (less is better)
• Size of sample (more is better)
• Variation in explanatory variable, X (more is better)
Errors are normally distributed (an assumption of the model); therefore, so
are the parameter estimates. So, t-tests, confidence intervals, and p-values
work for coefficient estimates.
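These sampling properties can be illustrated by simulation, a sketch under assumed "true" parameter values (true_b0, true_b1, and sigma below are made up for the illustration): repeatedly draw normal errors, refit the line, and look at the distribution of the slope estimates.

```python
rng = np.random.default_rng(0)
true_b0, true_b1, sigma = 1.0, 0.8, 0.5   # assumed true values for the simulation

slopes = []
for _ in range(5000):
    y_sim = true_b0 + true_b1 * x + rng.normal(0.0, sigma, size=len(x))
    slopes.append(np.sum((x - x.mean()) * (y_sim - y_sim.mean()))
                  / np.sum((x - x.mean()) ** 2))

slopes = np.array(slopes)
print(slopes.mean())  # close to true_b1: the estimator is unbiased
print(slopes.std())   # approximates SE(beta1-hat) = sigma / sqrt(SSxx)
```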
2. Hypothesis test of one coefficient, $\beta_1$
Step 1: Set up hypothesis about true coefficient
$H_0: \beta_1 = 0$
$H_a: \beta_1 \neq 0$
Step 2: Find test statistic
Tells us how many standard errors away from zero the coefficient is.
$t = \frac{\hat{\beta}_1 - \beta_{H_0}}{SE_{\hat{\beta}_1}}$
SE usually obtained from SPSS or Excel.
Can calculate SE:

$SE_{\hat{\beta}_1} = \frac{s}{\sqrt{SS_{xx}}}$

where

$s = \sqrt{\frac{SS_{yy} - \hat{\beta}_1 SS_{xy}}{n - k - 1}} = \sqrt{\frac{\left(\sum y^2 - \frac{(\sum y)^2}{n}\right) - \hat{\beta}_1\left(\sum xy - \frac{\sum x \sum y}{n}\right)}{n - k - 1}}$

and

$SS_{xx} = \sum x^2 - \frac{(\sum x)^2}{n}$
Step 3: Find critical value
$t_\alpha$ (the critical t) can be found in the t table with n - k - 1 degrees of freedom
(n = sample size, k = number of explanatory variables)
Step 4: If $|t| > t_\alpha$, then reject $H_0$
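Continuing the sketch, here are the four steps for testing $H_0: \beta_1 = 0$ at α = 0.05; scipy's t distribution stands in for the t table (or the SPSS/Excel output the notes mention):

```python
from scipy import stats

k = 1                                   # one explanatory variable
df = n - k - 1

ss_xx = np.sum((x - x.mean()) ** 2)
se_b1 = s / np.sqrt(ss_xx)              # SE of the slope

t_stat = (b1 - 0.0) / se_b1             # Step 2: test statistic under H0
t_crit = stats.t.ppf(1 - 0.05 / 2, df)  # Step 3: two-sided critical value

print(abs(t_stat) > t_crit)             # Step 4: True -> reject H0
```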
Problems with hypothesis tests
a) Type I Error
Reject the null when the null is true. The significance level, α, is the chance of making this type of error; the p-value is the smallest α at which we would reject.
b) Type II Error
We fail to reject the null when the null is false. The chance of making this type of error decreases with larger sample sizes, with smaller standard errors of the parameter estimate, and with larger values of the true parameter.
Example 3: Assignment #1 Part III 2(a)
3. Confidence Interval for parameter estimate
A $(1-\alpha) \cdot 100\%$ confidence interval for the true coefficient (the slope, or β, on some predictor X) is given by $\hat{\beta}_1 \pm t_{1-\alpha/2,\,df} \cdot SE_{\hat{\beta}_1}$, where we use t at n - k - 1 degrees of freedom.
We can be (1 – α)•100% confident that the true slope is between these
values.
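Continuing the sketch, a 95% confidence interval for the slope:

```python
alpha = 0.05
half_width = stats.t.ppf(1 - alpha / 2, df) * se_b1
print((b1 - half_width, b1 + half_width))  # 95% CI for beta1
```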
P-value
Probability that you would get an estimate so far (in SEs) from H0 if H0 were
true.
• p-values give the level of support for $H_0$: small p-values are evidence against it.
• You can look up the t-statistic in a t or z table (or use Excel) to find the probability in the tail.
• If you select α = 0.05, then you would reject $H_0$ if p < 0.05.
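And the two-sided p-value for the same test, continuing the sketch (scipy's survival function gives the upper-tail probability):

```python
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p-value
print(p_value, p_value < 0.05)
```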
Example 4: Assignment #1 Part III 2 (c)
The Research Process--Points to Address in a Research Proposal
In the proposal, make sure you respond to each of these points.
1. Formulate a question/problem
2. Review the literature and develop a theoretical model
• What else has been done on this? What theories address it?
3. Specify the model: independent and dependent variables
4. Hypothesize about the expected signs of the coefficients
• Translate your question into specific hypotheses
5. Collect the data/operationalize
• all variables have the same number of observations
• unit of analysis (person, month, year, household)
• degrees of freedom (at least 1 more than the number of parameters; more is better)
6. Analysis
• In the case of regression, estimate and evaluate the equation
• In the proposal, discuss what analyses you will undertake.