<Dd> {\displaystyle t_{\hat{\beta}} = \frac{\hat{\beta} - \beta_{0}}{\mathrm{s.e.}(\hat{\beta})}} </Dd> <P> where \(\beta_{0}\) is a non-random, known constant which may or may not match the actual unknown parameter value \(\beta\), and \(\mathrm{s.e.}(\hat{\beta})\) is the standard error of the estimator \(\hat{\beta}\) for \(\beta\). By default, statistical packages report the t-statistic with \(\beta_{0} = 0\) (these t-statistics are used to test the significance of the corresponding regressor). However, when the t-statistic is needed to test a hypothesis of the form \(H_{0}\colon \beta = \beta_{0}\), a non-zero \(\beta_{0}\) may be used. </P> <P> If \(\hat{\beta}\) is an ordinary least squares estimator in the classical linear regression model (that is, with normally distributed and homoscedastic error terms), and if the true value of the parameter \(\beta\) is equal to \(\beta_{0}\), then the sampling distribution of the t-statistic is the Student's t-distribution with (n − k) degrees of freedom, where n is the number of observations and k is the number of regressors (including the intercept). </P> <P> In the majority of models the estimator \(\hat{\beta}\) is consistent for \(\beta\) and asymptotically normally distributed. If the true value of the parameter \(\beta\) is equal to \(\beta_{0}\), and the quantity \(\mathrm{s.e.}(\hat{\beta})\) correctly estimates the asymptotic variance of the estimator, then the t-statistic asymptotically has the standard normal distribution. </P>
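<P> The definitions above can be sketched in code. The following is a minimal illustration (not any particular package's implementation) of computing OLS coefficient t-statistics under the default null \(\beta_{0} = 0\), using only NumPy; the data, sample size, and variable names are invented for the example. </P>

```python
import numpy as np

# Simulate a classical linear regression: n observations, k regressors
# (including the intercept). All values here are illustrative.
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([1.0, 2.0, 0.0])
y = X @ true_beta + rng.normal(size=n)

# OLS estimate of beta
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standard errors: s^2 * diag((X'X)^{-1}), with the unbiased variance
# estimate s^2 = RSS / (n - k)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

# t-statistics for the default null hypothesis beta_0 = 0
beta_null = np.zeros(k)
t_stats = (beta_hat - beta_null) / se

# Under H_0, each t-statistic follows Student's t with n - k df
df = n - k
print(t_stats, df)
```

<P> Substituting a non-zero vector for <code>beta_null</code> gives the t-statistic for the general hypothesis \(H_{0}\colon \beta = \beta_{0}\). </P>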
