The perceptron was the first algorithm proposed in the history of artificial neural networks.
The word *perceptron* is nowadays associated with its graphical representation:

The perceptron is the graphical representation of a mathematical function composed of two parts. The left side is the weighted sum of the inputs \( x_i \), while the right side is a general function called the activation function, chosen according to the problem the network will have to solve. Usually, it is recommended to pick differentiable functions. The global transfer function of a perceptron is given by:

$$ y(x_i,w_i)= f(w_1 x_1 + w_2 x_2 + \dots + w_N x_N) $$

$$ y(x_i,w_i)= f\left(\sum_{i=1}^N w_i x_i\right) $$
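The transfer function above can be sketched in a few lines of Python. The choice of a step activation as the default here is illustrative (the text leaves the activation function open); the function names are placeholders, not part of any library.

```python
import math

def perceptron(x, w, f=lambda s: 1.0 if s >= 0 else 0.0):
    """Compute y = f(sum_i w_i * x_i).

    x, w : sequences of inputs and weights of equal length.
    f    : activation function; defaults to a step function
           (an illustrative choice, not mandated by the text).
    """
    # Left side of the perceptron: weighted sum of the inputs
    s = sum(wi * xi for wi, xi in zip(w, x))
    # Right side: activation function applied to the sum
    return f(s)

# Example with a differentiable activation (sigmoid), as recommended above
def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

y = perceptron([1.0, 2.0], [0.5, -0.25], f=sigmoid)
```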

The perceptron, also called a neuron, is the basic building block of neural networks. By assembling several of them, it becomes possible to create more complex networks.

- Neural networks curve fitting
- Datasets for deep learning
- Gradient descent example
- Learning rule demonstration
- Linear regression example
- Most popular activation functions for deep learning
- Simplest neural network with TensorFlow
- Simplest perceptron
- Single layer training algorithm
- Single layer classification example
- Gradient descent for neural networks
- Single layer limitations
- Neural networks

Last update: 01/30/2021