<P> For a t-distribution with ν degrees of freedom, the expected value is 0 if ν > 1, and its variance is ν/(ν − 2) if ν > 2. The skewness is 0 if ν > 3 and the excess kurtosis is 6/(ν − 4) if ν > 4. </P>
<P> There are various approaches to constructing random samples from the Student's t-distribution. The matter depends on whether the samples are required on a stand-alone basis, or are to be constructed by application of a quantile function to uniform samples, e.g., in the multi-dimensional applications of copula dependency. In the case of stand-alone sampling, an extension of the Box–Muller method and its polar form is easily deployed. It has the merit that it applies equally well to all real positive degrees of freedom, ν, while many other candidate methods fail if ν is close to zero. </P>
<P> The function A(t | ν) is the integral of Student's probability density function, f(t), between −t and t, for t ≥ 0. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function A(t | ν) can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t-tests. For the statistic t, with ν degrees of freedom, A(t | ν) is the probability that t would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that t ≥ 0).
It can be easily calculated from the cumulative distribution function F_ν(t) of the t-distribution: </P> <Dl> <Dd> \( A(t \mid \nu) = F_{\nu}(t) - F_{\nu}(-t) = 1 - I_{\frac{\nu}{\nu + t^{2}}}\!\left(\frac{\nu}{2}, \frac{1}{2}\right), \) where I_x(a, b) is the regularized incomplete beta function. </Dd> </Dl>
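As a sanity check on the expression above, here is a minimal Python sketch that evaluates A(t | ν) directly from its definition, by integrating the Student density from −t to t with Simpson's rule instead of calling an incomplete beta routine (the function names `student_t_pdf` and `A` are illustrative, not from any library):

```python
import math

def student_t_pdf(x, nu):
    """Student's t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

def A(t, nu, n=2000):
    """A(t | nu): integral of the density from -t to t, for t >= 0.

    By symmetry this is 2 * integral from 0 to t, evaluated here with
    Simpson's rule on n (even) subintervals.
    """
    if t == 0:
        return 0.0
    h = t / n
    s = student_t_pdf(0.0, nu) + student_t_pdf(t, nu)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * student_t_pdf(i * h, nu)
    return 2 * (h / 3) * s
```

For ν = 1 the distribution is Cauchy, so A(t | 1) = (2/π) arctan t; in particular A(1 | 1) = 1/2 exactly, which gives a convenient closed-form value to test against.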
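The stand-alone polar sampler alluded to earlier can be sketched as follows; this is the Bailey-style polar extension of the Box–Muller method (the function name `sample_t` is an illustrative choice). The factor ν(w^(−2/ν) − 1) replaces the −2 ln w of the Gaussian polar method and tends to it as ν → ∞, which is why the scheme applies to all real ν > 0:

```python
import math
import random

def sample_t(nu, rng=random):
    """Draw one sample from Student's t with nu > 0 degrees of freedom
    via the polar (Bailey) extension of the Box-Muller method:
    accept (u, v) uniform in the unit disc, set w = u*u + v*v, and
    return u * sqrt(nu * (w**(-2/nu) - 1) / w)."""
    while True:
        u = rng.uniform(-1.0, 1.0)
        v = rng.uniform(-1.0, 1.0)
        w = u * u + v * v
        if 0.0 < w <= 1.0:
            return u * math.sqrt(nu * (w ** (-2.0 / nu) - 1.0) / w)
```

A large seeded sample with ν = 5 should reproduce the moments quoted above: sample mean near 0 and sample variance near ν/(ν − 2) = 5/3.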
