<P> The p-value is widely used in statistical hypothesis testing, specifically in null-hypothesis significance testing. In this approach, as part of the experimental design, before performing the experiment one first chooses a model (the null hypothesis) and a threshold value for p, called the significance level of the test, traditionally 5% or 1% and denoted α. If the p-value is less than the chosen significance level (α), the observed data are considered sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the tested hypothesis is true. When the p-value is calculated correctly, this test guarantees that the type I error rate is at most α. For a typical analysis using the standard α = 0.05 cutoff, the null hypothesis is rejected when p < .05 and not rejected when p > .05. The p-value does not, in itself, support reasoning about the probabilities of hypotheses; it is only a tool for deciding whether to reject the null hypothesis. </P> <P> Usually, X is a test statistic rather than any of the actual observations. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as an average or a correlation coefficient, that summarizes the characteristics of the data in a way relevant to a particular inquiry. As such, the test statistic follows a distribution determined by the function used to define it and by the distribution of the input observational data. </P> <P> For the important case in which the data are hypothesized to follow the normal distribution, different null-hypothesis tests have been developed, depending on the nature of the test statistic and thus on the underlying hypothesis being tested. Examples include the z-test for the normal distribution, the t-test for Student's t-distribution, and the F-test for the F-distribution.
When the data do not follow a normal distribution, it may still be possible to approximate the distribution of these test statistics by a normal distribution by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test. </P> <P> Thus, computing a p-value requires a null hypothesis, a test statistic (together with a decision on whether the researcher is performing a one-tailed or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing its sampling distribution under the null hypothesis, and then computing its cumulative distribution function (CDF), is often a difficult problem. Today, this computation is done with statistical software, often via numeric methods rather than exact formulae; in the early and mid 20th century it was instead done with tables of values, and p-values were interpolated or extrapolated from those discrete values. Rather than using a table of p-values, Fisher inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF). </P>
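The pipeline described above (test statistic → sampling distribution → CDF → p-value → decision against α) can be sketched for the simplest case, a two-tailed one-sample z-test, using only Python's standard library. The sample data and parameters below are invented purely for illustration:

```python
from math import sqrt
from statistics import NormalDist, mean

def z_test_p_value(sample, mu0, sigma):
    """Two-tailed one-sample z-test: p-value for H0 "population mean == mu0",
    assuming the population standard deviation sigma is known."""
    n = len(sample)
    # Test statistic: a scalar function of all the observations.
    z = (mean(sample) - mu0) / (sigma / sqrt(n))
    # Under H0, z follows the standard normal distribution; the p-value is
    # the probability of a statistic at least as extreme as the one observed.
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical data: H0 says the mean is 100, with known sigma = 15.
sample = [112, 109, 104, 118, 107, 111, 103, 115]
p = z_test_p_value(sample, mu0=100, sigma=15)
alpha = 0.05
print(f"p = {p:.4f}; reject H0 at alpha = {alpha}: {p < alpha}")
```

Note that the decision is binary (reject or not reject at the chosen α); as the text emphasizes, the p-value itself says nothing about the probability that either hypothesis is true.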

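Fisher's inversion of the CDF, mentioned above, amounts to evaluating the quantile function: instead of tabulating a p-value for each observed statistic, one tabulates, for a few fixed significance levels, the critical value the statistic must exceed. A minimal sketch for the standard normal case (the significance levels chosen are conventional, not taken from Fisher's actual tables):

```python
from statistics import NormalDist

# For fixed significance levels, compute the two-tailed critical value of z:
# reject H0 when |z| exceeds it. This is the quantile function (inverse CDF).
for alpha in (0.10, 0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"alpha = {alpha}: reject H0 when |z| > {z_crit:.3f}")
```

This reproduces the familiar critical values (approximately 1.645, 1.960, and 2.576), which is why such constants, rather than raw p-values, dominate older statistical tables.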