$$
\begin{aligned}
\log p(C_k \mid \mathbf{x}) & \varpropto \log\left( p(C_k) \prod_{i=1}^{n} p_{ki}^{x_i} \right) \\
& = \log p(C_k) + \sum_{i=1}^{n} x_i \cdot \log p_{ki} \\
& = b + \mathbf{w}_k^{\top} \mathbf{x}
\end{aligned}
$$

where $b = \log p(C_k)$ and $w_{ki} = \log p_{ki}$.

If a given class and feature value never occur together in the training data, then the frequency-based probability estimate will be zero. This is problematic because it will wipe out all information in the other probabilities when they are multiplied. It is therefore often desirable to incorporate a small-sample correction, called a pseudocount, into all probability estimates, so that no probability is ever set to exactly zero. This way of regularizing naive Bayes is called Laplace smoothing when the pseudocount is one, and Lidstone smoothing in the general case.
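As a rough illustration of both ideas, the sketch below estimates the smoothed log-parameters and then classifies by the linear score $b + \mathbf{w}_k^{\top}\mathbf{x}$. It is a minimal example, not a reference implementation; the function names (`fit_multinomial_nb`, `predict`) and the `alpha` parameter are illustrative choices, with `alpha=1.0` corresponding to Laplace smoothing and other positive values to Lidstone smoothing.

```python
import numpy as np

def fit_multinomial_nb(X, y, alpha=1.0):
    """Estimate log-priors b_k and log-likelihood weights w_ki with a
    pseudocount alpha (alpha=1.0 is Laplace smoothing).

    X : (n_samples, n_features) array of non-negative feature counts
    y : (n_samples,) array of class labels
    """
    classes = np.unique(y)
    n_features = X.shape[1]
    b = np.empty(len(classes))                # b_k  = log p(C_k)
    W = np.empty((len(classes), n_features))  # w_ki = log p_ki
    for idx, k in enumerate(classes):
        Xk = X[y == k]
        b[idx] = np.log(Xk.shape[0] / X.shape[0])
        counts = Xk.sum(axis=0) + alpha       # pseudocount keeps every p_ki > 0
        W[idx] = np.log(counts / counts.sum())
    return classes, b, W

def predict(X, classes, b, W):
    # Score each class with the linear form b + w_k^T x and take the argmax.
    scores = X @ W.T + b
    return classes[np.argmax(scores, axis=1)]

# Toy usage on made-up count data:
X = np.array([[2, 1, 0], [0, 3, 1], [1, 0, 4]])
y = np.array([0, 0, 1])
classes, b, W = fit_multinomial_nb(X, y)
print(predict(X, classes, b, W))
```

Because of the smoothing step, a feature that never co-occurs with a class in training still receives a small nonzero probability, so a single unseen feature cannot drive the whole class score to negative infinity.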
