Let's consider the following simple perceptron, with a transfer function given by \( f(x)=x \) to keep the maths simple:

The perceptron's global transfer function is given by the following equation:

$$ \begin{equation} y = w_1 x_1 + w_2 x_2 + \dots + w_N x_N = \sum\limits_{i=1}^N w_i x_i \label{eq:transfert-function} \end{equation} $$
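This weighted sum is straightforward to compute. Below is a minimal sketch; the function and variable names (`perceptron_output`, `weights`, `inputs`) are illustrative, not from the article:

```python
def perceptron_output(weights, inputs):
    """Return y = sum_i w_i * x_i (identity transfer function f(x) = x)."""
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.5, -1.0, 2.0]
inputs = [1.0, 2.0, 0.5]
y = perceptron_output(weights, inputs)
print(y)  # 0.5*1.0 - 1.0*2.0 + 2.0*0.5 = -0.5
```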

In artificial neural networks, the error we want to minimize is:

$$ \begin{equation} E=(y'-y)^2 \label{eq:error} \end{equation} $$

with:

- \(E\) the error
- \(y'\) the expected output (from the training data set)
- \(y\) the actual output of the network

In practice, this error is divided by two to simplify the maths (the \( \frac{1}{2} \) cancels the factor of 2 that appears when differentiating):

$$ E=\frac{1}{2}(y'-y)^2 $$
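As a quick sketch, this halved squared error can be written as follows (the name `halved_squared_error` is an assumption for illustration):

```python
def halved_squared_error(y_expected, y_actual):
    """E = (1/2) * (y' - y)^2; the 1/2 cancels the 2 from differentiation."""
    return 0.5 * (y_expected - y_actual) ** 2

print(halved_squared_error(3.0, 1.0))  # 0.5 * (3 - 1)^2 = 2.0
```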

The algorithm (gradient descent) used to train the network (i.e. updating the weights) is given by:

$$ w_i' = w_i - \eta \frac{dE}{dw_i} $$

where:

- \( w_i \) the weight before update
- \( w_i' \) the weight after update
- \( \eta \) the learning rate

Let's differentiate the error:

$$ \begin{equation} \frac{dE}{dw_i} = \frac{1}{2}\frac{d}{dw_i}(y'-y)^2 \label{eq:eq-error} \end{equation} $$

Thanks to the chain rule

$$ (f \circ g)'=(f' \circ g).g' $$

equation \( \eqref{eq:eq-error} \) can be rewritten as:

$$ \frac{dE}{dw_i} = \frac{2}{2}(y'-y)\frac{d}{dw_i} (y'-y) = -(y'-y)\frac{dy}{dw_i} $$

since \( y' \) comes from the training data set and therefore does not depend on \( w_i \).

As \( y = w_1 x_1 + w_2 x_2 + \dots + w_N x_N \):

$$ \frac{dE}{dw_i} = -(y'-y)\frac{d}{dw_i}(w_1 x_1 + w_2 x_2 + \dots + w_N x_N) = -(y'-y)x_i $$
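The result \( \frac{dE}{dw_i} = -(y'-y)x_i \) can be verified numerically against finite differences of \( E \). The sketch below does exactly that; all names are made up for illustration:

```python
def analytic_gradient(weights, inputs, y_expected):
    """Gradient from the derivation: dE/dw_i = -(y' - y) * x_i."""
    y = sum(w * x for w, x in zip(weights, inputs))
    return [-(y_expected - y) * x for x in inputs]

def numerical_gradient(weights, inputs, y_expected, eps=1e-6):
    """Central finite differences on E(w) = (1/2) * (y' - y(w))^2."""
    def error(ws):
        y = sum(w * x for w, x in zip(ws, inputs))
        return 0.5 * (y_expected - y) ** 2
    grads = []
    for i in range(len(weights)):
        w_hi = list(weights); w_hi[i] += eps
        w_lo = list(weights); w_lo[i] -= eps
        grads.append((error(w_hi) - error(w_lo)) / (2 * eps))
    return grads

# Here y = 0.5*2.0 - 1.0*3.0 = -2.0, so y' - y = 3.0.
print(analytic_gradient([0.5, -1.0], [2.0, 3.0], 1.0))   # [-6.0, -9.0]
print(numerical_gradient([0.5, -1.0], [2.0, 3.0], 1.0))  # ~[-6.0, -9.0]
```

Both gradients agree, confirming the derivation.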

The weights can be updated with the following formula:

$$ w_i' = w_i - \eta\frac{dE}{dw_i} = w_i + \eta(y'-y)x_i $$

In conclusion:

$$ w_i'= w_i + \eta(y'-y)x_i $$
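This learning rule is enough to train the perceptron on a linear target. Below is a toy training loop applying \( w_i' = w_i + \eta(y'-y)x_i \); the data set, learning rate, and all names are assumptions made up for illustration:

```python
def train(samples, n_inputs, eta=0.05, epochs=200):
    """samples: list of (inputs, expected_output) pairs."""
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, y_expected in samples:
            # Forward pass: y = sum_i w_i * x_i (identity transfer function).
            y = sum(w * x for w, x in zip(weights, inputs))
            # Apply the learning rule derived above to every weight.
            weights = [w + eta * (y_expected - y) * x
                       for w, x in zip(weights, inputs)]
    return weights

# Learn the linear map y = 2*x1 - 3*x2 from a small grid of points.
samples = [([x1, x2], 2 * x1 - 3 * x2)
           for x1 in (-1, 0, 1, 2) for x2 in (-1, 0, 1)]
weights = train(samples, n_inputs=2)
print(weights)  # close to [2.0, -3.0]
```

Since the target is exactly linear, the error is driven to zero and the weights converge to the true coefficients.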


Last update : 03/05/2020