While in principle the acceptable level of statistical significance may be subject to debate, the p-value is the smallest significance level at which the test rejects the null hypothesis. Equivalently, the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the p-value, the lower the probability of committing a type I error.

Several problems are commonly associated with this framework (see criticism of hypothesis testing):

- A difference that is highly statistically significant can still be of no practical significance, although it is possible to formulate tests that account for this. One response is to go beyond reporting only the significance level and include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals: although these are produced from the same calculations as hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it (see the first sketch after this list).
- Fallacy of the transposed conditional, also known as the prosecutor's fallacy: criticisms arise because the hypothesis-testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis, not the probability of the null hypothesis given the observed result. An alternative is offered by Bayesian inference, although it requires establishing a prior probability (see the second sketch after this list).
- Rejecting the null hypothesis does not automatically prove the alternative hypothesis.
- As with everything in inferential statistics, hypothesis testing relies on sample size, and therefore under fat tails p-values may be seriously miscomputed.
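The contrast between statistical and practical significance in the first point can be made concrete. The following is a minimal sketch (assuming NumPy and SciPy are available; the sample size, effect size, and group labels are illustrative choices, not taken from the text): with a very large sample, a trivially small true difference between two groups yields a tiny p-value, while the confidence interval for the difference makes the smallness of the effect explicit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.normal(loc=0.00, scale=1.0, size=n)   # "control" group
b = rng.normal(loc=0.01, scale=1.0, size=n)   # "treatment" group: tiny true effect

# Two-sample t-test: probability of a difference at least this extreme
# if the null hypothesis of equal means were true.
t_stat, p_value = stats.ttest_ind(a, b)

# 95% confidence interval for the difference in means (normal approximation).
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"p-value: {p_value:.2e}")                                   # typically far below 0.05
print(f"95% CI for the difference: ({ci[0]:.4f}, {ci[1]:.4f})")    # interval near 0.01
```

The p-value alone flags the difference as "highly significant", whereas the interval shows the effect is on the order of 0.01 standard deviations, which may be practically negligible.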

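The fallacy of the transposed conditional in the second point can likewise be illustrated with a few lines of arithmetic. In this sketch the prior probability of the null hypothesis and the likelihood of the data under the alternative are made-up values chosen purely for illustration; the only point is that P(data | H0) and P(H0 | data) can differ substantially.

```python
# Assumed, illustrative inputs (not from the text):
p_data_given_h0 = 0.04   # likelihood of the observed result under the null hypothesis
p_data_given_h1 = 0.30   # assumed likelihood of the observed result under the alternative
prior_h0 = 0.90          # assumed prior probability that the null hypothesis is true

# Bayes' theorem: P(H0 | data) = P(data | H0) * P(H0) / P(data)
p_data = p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
posterior_h0 = p_data_given_h0 * prior_h0 / p_data

print(f"P(data | H0) = {p_data_given_h0:.2f}")
print(f"P(H0 | data) = {posterior_h0:.2f}")   # about 0.55 here, far from 0.04
```

Even with a "significant" result of 0.04, the null hypothesis remains more likely true than not under this prior, which is exactly the distinction the criticism draws.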