<Dd>\(\text{SSE} = \sum_{i=1}^{n} (X_{i} - \bar{X})^{2} + \sum_{i=1}^{n} (Y_{i} - \bar{Y})^{2} + \sum_{i=1}^{n} (Z_{i} - \bar{Z})^{2}\)</Dd> <P>with 3(n − 1) degrees of freedom. Introductory books on ANOVA usually state these formulae without showing the vectors, but it is this underlying geometry that gives rise to the sum-of-squares formulae and shows how to determine the degrees of freedom unambiguously in any given situation.</P> <P>Under the null hypothesis of no difference between population means (and assuming that the standard ANOVA regularity assumptions are satisfied), the sums of squares have scaled chi-squared distributions with the corresponding degrees of freedom. The F-test statistic is the ratio of the sums of squares after each is scaled by its degrees of freedom. If there is no difference between population means, this ratio follows an F distribution with 2 and 3n − 3 degrees of freedom.</P> <P>In some complicated settings, such as unbalanced split-plot designs, the sums of squares no longer have scaled chi-squared distributions. Comparing a sum of squares with its degrees of freedom is then no longer meaningful, and software may report certain fractional 'degrees of freedom' in these cases. Such numbers have no genuine degrees-of-freedom interpretation; they simply provide an approximate chi-squared distribution for the corresponding sum of squares. The details of such approximations are beyond the scope of this page.</P>
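<P>The error sum of squares, its 3(n − 1) degrees of freedom, and the F statistic with 2 and 3n − 3 degrees of freedom can be sketched numerically. The following is a minimal illustration, assuming three equally sized groups of simulated data (the group sizes and data are illustrative, not from the text above):</P>

```python
import random

random.seed(0)
n = 10  # observations per group; three groups as in the text
groups = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]

def mean(xs):
    return sum(xs) / len(xs)

# Error (within-group) sum of squares: squared deviations of each
# observation from its own group mean, summed over all three groups.
sse = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
df_error = 3 * (n - 1)  # 3(n - 1) degrees of freedom

# Treatment (between-group) sum of squares, with k - 1 = 2 degrees
# of freedom for k = 3 groups.
grand_mean = mean([x for g in groups for x in g])
sst = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
df_treat = 2

# F statistic: ratio of the two sums of squares, each scaled by
# (divided by) its degrees of freedom.
f_stat = (sst / df_treat) / (sse / df_error)
print(df_error, f_stat)
```

<P>Under the null hypothesis, <code>f_stat</code> would be compared against an F distribution with 2 and 3n − 3 = 27 degrees of freedom.</P>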
