Table of D1 + D2 (rows give the value of D1, columns the value of D2):

D1 \ D2 |  1   2   3   4   5   6
   1    |  2   3   4   5   6   7
   2    |  3   4   5   6   7   8
   3    |  4   5   6   7   8   9
   4    |  5   6   7   8   9  10
   5    |  6   7   8   9  10  11
   6    |  7   8   9  10  11  12

Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2. We have

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{3/36}{10/36} = \frac{3}{10},$$

as seen in the table.

In statistical inference, the conditional probability is an update of the probability of an event based on new information. Incorporating the new information can be done as follows:

- Let A, the event of interest, be in the sample space, say (X, P).
- The occurrence of the event A, knowing that event B has or will have occurred, means the occurrence of A as it is restricted to B, i.e. A ∩ B.
- Without knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A).
- The probability of A, knowing that event B has or will have occurred, will be the probability of A ∩ B relative to P(B), the probability that B has occurred.
- This results in P(A | B) = P(A ∩ B) / P(B) whenever P(B) > 0, and 0 otherwise; the sketch after this list checks this rule on the dice example.
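Both the dice computation and the update rule above can be verified by direct enumeration of the 36 equally likely outcomes. The following is a minimal sketch, assuming Python with exact Fraction arithmetic; the helper names prob and cond_prob are illustrative and not part of the original text.

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely outcomes (d1, d2) of two fair dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event, given as a predicate over outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(a, b):
    """P(A | B) = P(A ∩ B) / P(B) whenever P(B) > 0, and 0 otherwise."""
    pb = prob(b)
    if pb == 0:
        return Fraction(0)
    p_ab = prob(lambda w: a(w) and b(w))  # P(A ∩ B)
    return p_ab / pb

# Event A: D1 = 2;  conditioning event B: D1 + D2 <= 5.
A = lambda w: w[0] == 2
B = lambda w: w[0] + w[1] <= 5

print(prob(lambda w: A(w) and B(w)))  # 1/12  (= 3/36)
print(prob(B))                        # 5/18  (= 10/36)
print(cond_prob(A, B))                # 3/10
```

Using Fraction keeps the arithmetic exact, so the printed values are the reduced forms of 3/36, 10/36 and 3/10 read off the table.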

D is independent of B when C is known.