<P> In the field of artificial intelligence, an inference engine is a component of the system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world; the inference engine applied logical rules to those facts and deduced new knowledge. This process would iterate, as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals and works backward to determine what facts must be asserted so that the goals can be achieved. </P> <P> The logic that an inference engine uses is typically represented as IF-THEN rules. The general format of such rules is IF <logical expression> THEN <logical expression>. Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerful theorem-prover environments that offered much fuller implementations of first-order logic. These supported, for example, general statements that included universal quantification (for all X, some statement is true) and existential quantification (there exists some X such that some statement is true). What researchers discovered is that the power of these theorem-proving environments was also their drawback. It was far too easy to create logical expressions that could take an indeterminate, or even infinite, time to terminate. For example, it is common in universal quantification to make statements over an infinite set, such as the set of all natural numbers.
Such statements are perfectly reasonable, and even required, in mathematical proofs, but when included in an automated theorem prover executing on a computer they may cause it to fall into an infinite loop. Focusing on IF-THEN statements (an application of what logicians call modus ponens) still gave developers a very powerful general mechanism for representing logic, but one that could be used efficiently with available computational resources. What is more, there is some psychological research indicating that humans also tend to favor IF-THEN representations when storing complex knowledge. </P>
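The forward-chaining mode described above can be sketched in a few lines of Python. This is a minimal illustration, not a production engine; the rules and fact names (frogs, croaking) are hypothetical examples. Each rule is an IF-THEN pair: when every fact in the antecedent is known, the consequent is asserted, and the loop repeats because a newly asserted fact may trigger further rules.

```python
def forward_chain(facts, rules):
    """Repeatedly fire IF-THEN rules whose antecedents are satisfied
    until no new fact can be derived (a fixed point is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            # Fire the rule if all antecedent facts are known
            # and the consequent is not yet in the knowledge base.
            if antecedent <= facts and consequent not in facts:
                facts.add(consequent)  # new fact may trigger more rules
                changed = True
    return facts

# Hypothetical knowledge base: IF croaks AND eats flies THEN is a frog, etc.
rules = [
    (frozenset({"croaks", "eats flies"}), "is a frog"),
    (frozenset({"is a frog"}), "is green"),
]
print(sorted(forward_chain({"croaks", "eats flies"}, rules)))
# → ['croaks', 'eats flies', 'is a frog', 'is green']
```

Note how the second rule fires only after the first has asserted "is a frog": this cascade is the iteration the text describes.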
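Backward chaining can be sketched in the same style (again a minimal illustration with hypothetical rule names, and no handling of cyclic rule sets): starting from a goal, the engine looks for a rule whose consequent matches the goal and recursively tries to establish each fact in that rule's antecedent.

```python
def backward_chain(goal, facts, rules):
    """Return True if the goal is a known fact, or if some rule
    concludes the goal and all of its antecedent facts can in turn
    be established by backward chaining."""
    if goal in facts:
        return True
    for antecedent, consequent in rules:
        if consequent == goal and all(
            backward_chain(sub, facts, rules) for sub in antecedent
        ):
            return True
    return False

# Same hypothetical rules as in the forward-chaining sketch.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"is a frog"}, "is green"),
]
print(backward_chain("is green", {"croaks", "eats flies"}, rules))
# → True
```

Here the engine never asserts intermediate facts; it only checks whether the goal "is green" can be traced back to the known facts, which is why this mode is called goal-driven.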
