$$\begin{cases} y_{t} = \alpha + \beta x_{t}^{*} + \varepsilon_{t}, \\ x_{t} = x_{t}^{*} + \eta_{t}, \end{cases}$$

where all variables are scalar. Here α and β are the parameters of interest, whereas σ_ε and σ_η, the standard deviations of the error terms, are the nuisance parameters. The "true" regressor x* is treated as a random variable (structural model), independent of the measurement error η (the classical assumption).

This model is identifiable in two cases: (1) the latent regressor x* is not normally distributed, or (2) x* is normally distributed, but neither ε_t nor η_t is divisible by a normal distribution. That is, the parameters α and β can be consistently estimated from the data set $\{x_{t},\, y_{t}\}_{t=1}^{T}$ without any additional information, provided the latent regressor is not Gaussian.

Before this identifiability result was established, statisticians attempted to apply the maximum likelihood technique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was to assume that some of the parameters of the model are known or can be estimated from an outside source. Such estimation methods include
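Both points can be illustrated numerically. The following is a minimal simulation sketch; the parameter values and the choice of a Geary-type third-moment estimator are illustrative assumptions, not prescribed by the text. It shows that naive OLS on the error-ridden x_t is attenuated toward zero, while a simple method-of-moments estimator based on third moments recovers β consistently when x* is skewed, i.e. non-Gaussian (case 1 above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (assumptions, not from the text)
alpha, beta = 1.0, 2.0            # parameters of interest
sigma_eps, sigma_eta = 1.0, 0.8   # nuisance standard deviations
T = 200_000

# Latent regressor: deliberately non-Gaussian (skewed),
# so the model is identifiable per case (1) above.
x_star = rng.exponential(scale=1.0, size=T)

eps = rng.normal(0.0, sigma_eps, size=T)   # equation error
eta = rng.normal(0.0, sigma_eta, size=T)   # measurement error

y = alpha + beta * x_star + eps            # structural equation
x = x_star + eta                           # observed, error-ridden regressor

# Naive OLS of y on x: inconsistent, attenuated by the reliability
# ratio lambda = var(x*) / (var(x*) + var(eta)).
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
lam = np.var(x_star, ddof=1) / (np.var(x_star, ddof=1) + sigma_eta**2)
print(f"OLS slope: {beta_ols:.3f}  (expected ~ {beta * lam:.3f})")

# Geary-type third-moment estimator: with u, v the centered observations,
# E[v^2 u] = beta^2 * E[(x*-mu)^3] and E[v u^2] = beta * E[(x*-mu)^3],
# so their ratio is consistent for beta whenever x* has nonzero skewness.
u = x - x.mean()
v = y - y.mean()
beta_mm = np.mean(v**2 * u) / np.mean(v * u**2)
alpha_mm = y.mean() - beta_mm * x.mean()
print(f"Third-moment slope: {beta_mm:.3f}, intercept: {alpha_mm:.3f}")
```

Note that the third-moment ratio breaks down when x* is symmetric (zero third central moment), which is exactly the Gaussian boundary case where identification requires the additional conditions in case (2).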

[Figure: Linear regression with errors in x and y]