$$s^2 = \frac{\hat{\varepsilon}^{\mathrm{T}}\hat{\varepsilon}}{n-p} = \frac{(My)^{\mathrm{T}}My}{n-p} = \frac{y^{\mathrm{T}}M^{\mathrm{T}}My}{n-p} = \frac{y^{\mathrm{T}}My}{n-p} = \frac{S(\hat{\beta})}{n-p}, \qquad \hat{\sigma}^2 = \frac{n-p}{n}\,s^2$$

The denominator, $n-p$, is the statistical degrees of freedom. The first quantity, $s^2$, is the OLS estimate for $\sigma^2$, whereas the second, $\hat{\sigma}^2$, is the MLE estimate for $\sigma^2$. The two estimators are quite similar in large samples; the first is always unbiased, while the second is biased but has a smaller mean squared error. In practice $s^2$ is used more often, since it is more convenient for hypothesis testing. The square root of $s^2$, denoted $s$, is called the standard error of the regression (SER), or standard error of the equation (SEE).

It is common to assess the goodness of fit of the OLS regression by comparing how much of the initial variation in the sample can be reduced by regressing onto $X$. The coefficient of determination $R^2$ is defined as the ratio of the "explained" variance to the "total" variance of the dependent variable $y$:

$$R^2 = \frac{\sum (\hat{y}_i - \overline{y})^2}{\sum (y_i - \overline{y})^2} = \frac{y^{\mathrm{T}}P^{\mathrm{T}}LPy}{y^{\mathrm{T}}Ly} = 1 - \frac{y^{\mathrm{T}}My}{y^{\mathrm{T}}Ly} = 1 - \frac{\mathrm{SSR}}{\mathrm{TSS}}$$
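These identities can be checked numerically. Below is a minimal sketch in Python/NumPy (not from the source text): the names `X`, `y`, `P`, `M`, and `L` mirror the article's notation, the simulated data is purely illustrative, and $L$ is taken to be the centering matrix $I_n - \tfrac{1}{n}J_n$, consistent with the $R^2$ identity above.

```python
import numpy as np

# Illustrative sketch: verify the s^2, sigma-hat^2, and R^2 formulas.
# The simulated data and parameter values below are assumptions, not
# taken from the source text.

rng = np.random.default_rng(0)
n, p = 100, 3                              # n observations, p regressors (incl. constant)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.8, size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T       # projection ("hat") matrix
M = np.eye(n) - P                          # annihilator matrix, M = I - P
L = np.eye(n) - np.ones((n, n)) / n        # centering matrix, L = I - J/n

s2 = (y @ M @ y) / (n - p)                 # unbiased OLS estimate of sigma^2
sigma2_mle = (n - p) / n * s2              # biased MLE, equals y'My / n
R2 = 1 - (y @ M @ y) / (y @ L @ y)         # coefficient of determination

print(f"s^2 = {s2:.4f}, sigma^2 (MLE) = {sigma2_mle:.4f}, R^2 = {R2:.4f}")
```

Note that the form $R^2 = 1 - y^{\mathrm{T}}My / y^{\mathrm{T}}Ly$ presupposes that $X$ contains a column of constants; without an intercept the "explained"/"total" variance decomposition no longer holds.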
