<P> The minimum description length (MDL) principle developed from ideas in information theory and the theory of Kolmogorov complexity. The MDL principle selects the statistical model that most compresses the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches. </P> <P> However, if a data-generating mechanism does exist in reality, then by Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation resembles maximum likelihood estimation and maximum a posteriori estimation (with maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assuming, for example, that the data arose from independent sampling. </P> <P> The MDL principle has been applied in communication and coding theory within information theory, in linear regression, and in data mining. </P> <P> The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory. </P>
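<P> The model-selection idea above can be sketched with a toy two-part code: the total description length is the cost of encoding the model plus the cost of encoding the data given the model, and MDL picks the model minimizing the sum. The code below uses a crude, commonly cited Gaussian code-length approximation (in nats); the specific data, penalty form, and function names are illustrative assumptions, not a standard API. </P>

```python
import numpy as np

# Toy two-part MDL selection of a polynomial degree.
# Code-length approximation (nats), assuming Gaussian residuals:
#   L(model)        ~ (k / 2) * log(n)        with k parameters
#   L(data | model) ~ (n / 2) * log(RSS / n)  (residual sum of squares)
# The model minimizing L(model) + L(data | model) is selected.

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-3.0, 3.0, n)
# Synthetic data: true curve is quadratic, plus Gaussian noise.
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.5, n)

def description_length(degree):
    """Two-part code length for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted coefficients
    return 0.5 * k * np.log(n) + 0.5 * n * np.log(rss / n)

lengths = {d: description_length(d) for d in range(6)}
best_degree = min(lengths, key=lengths.get)
print(best_degree)
```

<P> Underfitting models (degrees 0 and 1) pay a large data-coding cost; overfitting models shave little off the residual term while paying the per-parameter penalty, so a degree near the true one minimizes the total length. </P>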
