$$\frac{\partial \text{net}_j}{\partial w_{ij}} = \frac{\partial}{\partial w_{ij}}\left(\sum_{k=1}^{n} w_{kj}\, o_k\right) = \frac{\partial}{\partial w_{ij}}\, w_{ij}\, o_i = o_i.$$

If the neuron is in the first layer after the input layer, $o_i$ is just $x_i$.

The derivative of the output of neuron $j$ with respect to its input is simply the partial derivative of the activation function (assuming here that the logistic function is used):

$$\frac{\partial o_j}{\partial \text{net}_j} = \frac{\partial}{\partial \text{net}_j}\,\varphi(\text{net}_j) = \varphi(\text{net}_j)\,\bigl(1 - \varphi(\text{net}_j)\bigr)$$
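As a sanity check on this identity, the short Python sketch below (not part of the original derivation; the evaluation point $\text{net}_j = 0.7$ and the helper names are chosen purely for illustration) confirms numerically that the analytic form $\varphi(\text{net}_j)\,(1 - \varphi(\text{net}_j))$ agrees with a central finite-difference estimate of $\partial o_j / \partial \text{net}_j$.

```python
import math

def logistic(x):
    # phi(x) = 1 / (1 + e^(-x)), the activation function assumed in the text
    return 1.0 / (1.0 + math.exp(-x))

def logistic_derivative(x):
    # Analytic form derived above: phi(x) * (1 - phi(x))
    p = logistic(x)
    return p * (1.0 - p)

# Central finite-difference estimate of d(phi)/d(net) at an arbitrary point
net = 0.7   # hypothetical net input, chosen only for illustration
h = 1e-6
numeric = (logistic(net + h) - logistic(net - h)) / (2.0 * h)

print(logistic_derivative(net))  # -> 0.2217...
print(numeric)                   # agrees with the analytic value to ~1e-10
```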

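Putting the two partials together: in the full derivation, $\partial E / \partial w_{ij}$ factors via the chain rule as $\frac{\partial E}{\partial o_j}\,\frac{\partial o_j}{\partial \text{net}_j}\,\frac{\partial \text{net}_j}{\partial w_{ij}}$. The sketch below is a minimal worked example under assumptions not stated in this excerpt (a single logistic output neuron, squared-error loss $E = \tfrac{1}{2}(o_j - t)^2$, and made-up input activations, weights, and target): it computes the analytic gradient from the three factors and checks each weight's gradient against a finite-difference estimate.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical single-neuron setup: inputs o_i feeding output neuron j
o = [0.5, -0.2, 0.1]    # activations o_i from the previous layer
w = [0.4, 0.7, -0.6]    # weights w_ij into neuron j
t = 1.0                 # target value (assumed squared-error loss)

def forward(weights):
    net_j = sum(wk * ok for wk, ok in zip(weights, o))  # net_j = sum_k w_kj o_k
    return logistic(net_j)

def loss(o_j):
    return 0.5 * (o_j - t) ** 2

# Analytic gradient from the chain rule:
#   dE/dw_ij = dE/do_j * do_j/dnet_j * dnet_j/dw_ij
#            = (o_j - t) * o_j (1 - o_j) * o_i
o_j = forward(w)
analytic = [(o_j - t) * o_j * (1.0 - o_j) * oi for oi in o]

# Finite-difference check on each weight
h = 1e-6
for i in range(len(w)):
    w_plus = w[:]
    w_plus[i] += h
    w_minus = w[:]
    w_minus[i] -= h
    numeric = (loss(forward(w_plus)) - loss(forward(w_minus))) / (2.0 * h)
    print(i, analytic[i], numeric)  # each pair agrees to ~1e-9
```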