<P> Consider, as an example, the k-nearest neighbour smoother, which is the average of the k measured values nearest to the given point . At each of the n measured points, the weight of the original value in the linear combination that makes up the predicted value is 1/k, so the trace of the hat matrix is n/k . Thus the smooth costs n/k effective degrees of freedom . </P> <P> As another example, consider the existence of nearly duplicated observations . Naive application of the classical formula, n − p, would lead to over-estimation of the residual degrees of freedom, as if each observation were independent . More realistically, though, the hat matrix H = X (X′ Σ⁻¹ X)⁻¹ X′ Σ⁻¹ would involve an observation covariance matrix Σ indicating the non-zero correlation among observations . The more general formulation of effective degrees of freedom would result in a more realistic estimate for, e.g., the error variance σ², which in turn scales the unknown parameters' a posteriori standard deviation; the degrees of freedom also affect the expansion factor necessary to produce an error ellipse for a given confidence level . </P> <P> Similar concepts are the equivalent degrees of freedom in non-parametric regression, the degrees of freedom of signal in atmospheric studies, and the non-integer degrees of freedom in geodesy . </P>
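The k-nearest neighbour claim above can be checked numerically: because each fitted value is the unweighted mean of the k nearest observations (including the point itself), the smoother is linear with a hat matrix whose diagonal entries are all 1/k, so its trace is n/k. The following is a minimal sketch, assuming a one-dimensional design with n = 20 points and k = 5; `knn_hat_matrix` is an illustrative helper, not a standard library function.

```python
import numpy as np

def knn_hat_matrix(x, k):
    """Hat matrix of the k-NN smoother evaluated at the n design points x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        # Indices of the k points nearest to x[i]; x[i] itself is at
        # distance 0, so it is always among them, giving H[i, i] = 1/k.
        nearest = np.argsort(np.abs(x - x[i]))[:k]
        H[i, nearest] = 1.0 / k
    return H

x = np.linspace(0.0, 1.0, 20)   # n = 20 design points (assumed example data)
k = 5
H = knn_hat_matrix(x, k)
print(np.trace(H))              # trace = n/k = 20/5 = 4.0
```

The trace equals 4.0 regardless of how the design points are spaced, since only the diagonal entries 1/k contribute.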
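The effect of nearly duplicated observations can also be illustrated numerically. One way to quantify it: for ordinary least squares with error covariance σ²Σ (unit variances), E[RSS] = σ² tr((I − H)Σ), so tr((I − H)Σ) plays the role of the residual degrees of freedom when estimating σ². The sketch below uses assumed example data: n = 10 observations forming 5 duplicated pairs, p = 2 parameters, and within-pair error correlation ρ = 0.9; all numbers are illustrative.

```python
import numpy as np

n, p, rho = 10, 2, 0.9
# Design matrix with each x-value duplicated: 5 pairs of identical rows.
X = np.column_stack([np.ones(n), np.repeat(np.arange(5.0), 2)])

# Ordinary least-squares hat matrix H = X (X'X)^{-1} X'.
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Observation correlation: each duplicated pair has correlation rho.
Sigma = np.eye(n)
for i in range(0, n, 2):
    Sigma[i, i + 1] = Sigma[i + 1, i] = rho

naive_df = n - p                                # 8, as if independent
eff_df = np.trace((np.eye(n) - H) @ Sigma)      # effective residual df
print(naive_df, round(eff_df, 3))               # 8 6.2
```

With positive within-pair correlation the effective residual degrees of freedom (6.2 here) fall below the naive n − p = 8, so dividing the residual sum of squares by n − p would understate the error variance, as described above.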
