Simplest perceptron

Simple perceptron

Let's consider the following simple perceptron with a transfer function given by \( f(x)=x \) to keep the maths simple:

[Figure: overview of a perceptron]

Transfer function

The perceptron's global transfer function is given by the following equation:

$$ \begin{equation} y= w_1.x_1 + w_2.x_2 + ... + w_N.x_N = \sum\limits_{i=1}^N w_i.x_i \label{eq:transfert-function} \end{equation} $$
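As a quick illustration, here is a minimal Python sketch of this transfer function; the weight and input values below are made up for the example:

```python
# Minimal sketch of the perceptron transfer function y = sum(w_i * x_i),
# with the identity activation f(x) = x. All values are hypothetical.
def perceptron_output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.5, -0.2, 0.1]   # hypothetical weights w_1, w_2, w_3
inputs = [1.0, 2.0, 3.0]     # hypothetical inputs x_1, x_2, x_3
print(perceptron_output(weights, inputs))  # 0.5 - 0.4 + 0.3 = 0.4
```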

Error (or loss)

In artificial neural networks, the error we want to minimize is:

$$ \begin{equation} E=(y'-y)^2 \label{eq:error} \end{equation} $$

with:

- \( y' \) the expected (target) output,
- \( y \) the output computed by the perceptron.

In practice, this error is divided by two to simplify the maths, as the factor cancels when differentiating:

$$ E=\frac{1}{2}(y'-y)^2 $$
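As a sketch, the halved squared error can be computed as follows (the values are hypothetical):

```python
# Halved squared error E = 0.5 * (y' - y)^2; values are hypothetical.
def error(expected, actual):
    return 0.5 * (expected - actual) ** 2

print(error(1.0, 0.4))  # 0.5 * 0.6**2 = 0.18
```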

Gradient descent

The gradient descent algorithm used to train the network (i.e. to update the weights) is given by:

$$ w_i'=w_i-\eta.\frac{dE}{dw_i} $$

where:

- \( \eta \) is the learning rate,
- \( w_i' \) is the updated weight.
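As a quick numeric example of one update step (the weight, learning rate, and gradient values below are hypothetical):

```python
# One gradient-descent step w' = w - eta * dE/dw on a single weight.
# All values are hypothetical; the gradient is derived in the next section.
eta = 0.1    # learning rate
w = 0.5      # current weight
grad = -0.9  # hypothetical value of dE/dw
w = w - eta * grad
print(w)     # 0.5 - 0.1 * (-0.9) = 0.59
```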

Differentiating the error

Let's differentiate the error:

$$ \begin{equation} \frac{dE}{dw_i} = \frac{1}{2}\frac{d}{dw_i}(y'-y)^2 \label{eq:eq-error} \end{equation} $$

Thanks to the chain rule

$$ (f \circ g)'=(f' \circ g).g' $$

equation \( \eqref{eq:eq-error} \) can be rewritten as follows, since the expected output \( y' \) does not depend on \( w_i \):

$$ \frac{dE}{dw_i} = \frac{2}{2}(y'-y)\frac{d}{dw_i} (y'-y) = -(y'-y)\frac{dy}{dw_i} $$

As \( y= w_1.x_1 + w_2.x_2 + ... + w_N.x_N \):

$$ \frac{dE}{dw_i} = -(y'-y)\frac{d}{dw_i}(w_1.x_1 + w_2.x_2 + ... + w_N.x_N) = -(y'-y)x_i $$
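This closed-form gradient can be checked numerically; below is a small sketch comparing it to a finite-difference approximation (all values are hypothetical):

```python
# Check dE/dw_1 = -(y' - y) * x_1 against a finite-difference estimate.
# Weights, inputs and expected output are hypothetical.
def output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def error(expected, weights, inputs):
    return 0.5 * (expected - output(weights, inputs)) ** 2

weights, inputs, expected = [0.5, -0.2], [1.0, 2.0], 1.0

y = output(weights, inputs)             # y = 0.5 - 0.4 = 0.1
analytic = -(expected - y) * inputs[0]  # -(0.9) * 1.0 = -0.9

eps = 1e-6
bumped = [weights[0] + eps, weights[1]]
numeric = (error(expected, bumped, inputs) - error(expected, weights, inputs)) / eps

print(analytic, numeric)  # both should be close to -0.9
```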

Updating the weights

The weights can be updated with the following formula:

$$ w_i'=w_i-\eta.\frac{dE}{dw_i} = w_i+\eta(y'-y)x_i $$

In conclusion:

$$ w_i'= w_i + \eta(y'-y)x_i $$
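Putting everything together, here is a minimal training loop sketch based on this update rule; the data set and hyperparameters are made up for the example:

```python
# Train a single linear perceptron with w_i' = w_i + eta * (y' - y) * x_i.
# The samples below are hypothetical, drawn from y = 2*x_1 - x_2.
def train(samples, eta=0.1, epochs=50):
    weights = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for inputs, expected in samples:
            y = sum(w * x for w, x in zip(weights, inputs))
            for i, x in enumerate(inputs):
                weights[i] += eta * (expected - y) * x
    return weights

samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
print(train(samples))  # should approach [2.0, -1.0]
```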

Last update: 11/24/2021