Single layer limitations

This page presents, through a simple example, the main limitation of single-layer neural networks.

Network architecture

Let's consider the following single-layer network architecture with two inputs (\(a\), \(b\)) and one output (\(y\)).

Single layer architecture for logic function approximation
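To make this architecture concrete, here is a minimal sketch in Python. The weight names \(w_1, w_2, w_3\) match the transfer function given in the conclusion; the Heaviside step activation is an assumption for illustration, as the figure does not fix a particular activation.

import numpy as np

def step(x):
    """Heaviside step activation: 1 if x >= 0, else 0."""
    return np.where(x >= 0, 1, 0)

def single_layer(a, b, w1, w2, w3):
    """Single-layer network: two inputs (a, b), one output y.

    w1 and w2 are the input weights, w3 is the bias,
    matching y = w1*a + w2*b + w3.
    """
    return step(w1 * a + w2 * b + w3)

print(single_layer(1, 0, 1.0, 1.0, -0.5))  # -> 1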

Logic OR function

Let's assume we want to train an artificial single-layer neural network to learn logic functions, starting with the OR function:

a   b   y = a + b
0   0   0
0   1   1
1   0   1
1   1   1

The input space of the OR function can be drawn: the X-axis and Y-axis are respectively the \(a\) and \(b\) inputs. The green line is the separation line (\(y = 0\)). As illustrated below, the network can find an optimal solution:

Optimal solution for the OR logic function approximation
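A quick sketch confirms this. The page does not specify a training algorithm, so the classic perceptron learning rule is assumed here; the learning rate and initial weights are arbitrary choices. Because a separating line exists for OR, the loop converges:

import numpy as np

# OR truth table: inputs (a, b) and targets y = a + b
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 1, 1, 1])

w = np.zeros(2)   # weights w1, w2
bias = 0.0        # bias w3
lr = 0.1          # learning rate (arbitrary)

for epoch in range(20):
    errors = 0
    for x, t in zip(X, T):
        y = 1 if np.dot(w, x) + bias >= 0 else 0
        # Perceptron rule: adjust weights by the prediction error
        w += lr * (t - y) * x
        bias += lr * (t - y)
        errors += int(t != y)
    if errors == 0:
        break

print(w, bias)  # a separating line exists, so training converges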

Logic XOR function

Assume we now want to train the network on the XOR logic function:

a   b   y = a ⊕ b
0   0   0
0   1   1
1   0   1
1   1   0

As for the OR function, the input space can be drawn. Unfortunately, the network isn't able to discriminate the ones from the zeros:

A single-layer neural network fails to learn the XOR function
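Running the same perceptron loop sketched above for OR, but against the XOR table, illustrates the failure: the error count never reaches zero, however many epochs are allowed. The epoch limit below is an arbitrary cutoff for illustration.

import numpy as np

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 1, 1, 0])

w, bias, lr = np.zeros(2), 0.0, 0.1

for epoch in range(1000):
    errors = 0
    for x, t in zip(X, T):
        y = 1 if np.dot(w, x) + bias >= 0 else 0
        w += lr * (t - y) * x
        bias += lr * (t - y)
        errors += int(t != y)
    if errors == 0:
        break

# errors never reaches 0: no line w1*a + w2*b + w3 = 0 separates XOR
print(f"epoch {epoch}: {errors} misclassified")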

Conclusion

The transfer function of this single-layer network is given by:

$$ \begin{equation} y= w_1a + w_2b +w_3 \label{eq:transfert-function} \end{equation} $$

Equation \( \eqref{eq:transfert-function} \) is a linear model. This explains why the frontier between ones and zeros is necessarily a line. The XOR function is a non-linear problem that can't be classified with a linear model. Fortunately, multilayer perceptrons (MLP) can deal with non-linear problems.
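As a hint of why an extra layer helps, here is a sketch of a two-layer network that computes XOR with hand-picked weights; these weights are an illustrative assumption, not trained values. One hidden unit acts as OR, the other as AND, and the output fires when OR is true but AND is not:

import numpy as np

def step(x):
    return np.where(x >= 0, 1, 0)

def xor_mlp(a, b):
    """Two-layer network computing XOR with hand-picked weights."""
    h1 = step(a + b - 0.5)       # hidden unit 1: logical OR
    h2 = step(a + b - 1.5)       # hidden unit 2: logical AND
    return step(h1 - h2 - 0.5)   # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # reproduces the XOR truth table

Each hidden unit still draws a single line in the input space; combining the two lines is what carves out the non-linear XOR region.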

Last update: 03/11/2020