This page presents the simplest single-layer neural network you can create with TensorFlow. It shows, step by step, how to build and train your first simple neural network.
This example was written and tested with the following version:
Try the example online on Google Colaboratory.
The goal of this simple example is to approximate a linear function given by the following equation:
$$ y = ax + b = 0.6x + 2 $$
The blue dots are the training set; the red line is the output of the network:
Here is the complete source code. Each part is explained in the next section. This example can also be run online on Google Colaboratory.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
# Parameters (y = a*x + b)
a = 0.6
b = 2
# Create noisy data
x_data = np.linspace(-10, 10, num=100000)
y_data = a * x_data + b + np.random.normal(size=100000)
# Create the model
model = keras.Sequential()
model.add(keras.layers.Dense(units=1, activation='linear', input_shape=[1]))
model.compile(loss='mse', optimizer="adam")
# Display the model (only 2 parameters to optimize)
model.summary()
# Learn
model.fit(x_data, y_data, epochs=5, verbose=1)
# Predict (compute) the output
y_predicted = model.predict(x_data)
# Display the result
plt.scatter(x_data[::500], y_data[::500])
plt.plot(x_data, y_predicted, 'r', linewidth=4)
plt.grid()
plt.show()
First, we import the libraries:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
Then, we create the training data. x_data is composed of 100000 points, and normal (Gaussian) noise is added to the y-coordinate of each point:
# Parameters (y = a*x + b)
a = 0.6
b = 2
# Create noisy data
x_data = np.linspace(-10, 10, num=100000)
y_data = a * x_data + b + np.random.normal(size=100000)
Here is a sample of the training set:
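If you prefer to inspect a few points numerically, the following quick sketch (not part of the original listing; the step of 25000 is arbitrary) prints a handful of samples:
# Print a few (x, y) pairs from the noisy training set
for x, y in zip(x_data[::25000], y_data[::25000]):
    print(f"x = {x:6.2f}   y = {y:6.2f}")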
Once our training dataset is built, we can create our network. In TensorFlow this is called the model:
# Create the model
model = keras.Sequential()
model.add(keras.layers.Dense(units=1, activation='linear', input_shape=[1]))
model.compile(loss='mse', optimizer="adam")
# Display the model (only 2 parameters to optimize)
model.summary()
Let's analyse the code. First, we create a sequential Keras model. A Sequential model is a linear stack of layers; Keras is the high-level API used to build neural networks in TensorFlow.
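As a side note, the same model can equivalently be built by passing the list of layers directly to the Sequential constructor (a minimal sketch of the exact same network):
# Equivalent construction: pass the layer list to Sequential
model = keras.Sequential([
    keras.layers.Dense(units=1, activation='linear', input_shape=[1])
])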
We add a regular densely-connected layer (Dense) to our model, with:
- a single neuron (units = 1)
- a linear activation function (activation = 'linear')
- a single input (input_shape=[1])
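With a single unit and a linear activation, this layer computes nothing more than output = kernel * input + bias, which is exactly the y = a*x + b form we want to fit. A minimal NumPy-style sketch of that computation (kernel and bias stand for the layer's two trainable parameters):
# What Dense(units=1, activation='linear') computes for a scalar input
def dense_linear(x, kernel, bias):
    return kernel * x + bias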
The model is compiled with the following optimization parameters:
- the Adam optimizer (optimizer="adam")
- the mean squared error loss (loss='mse')
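The mean squared error is simply the average of the squared differences between the predictions and the expected outputs. A minimal NumPy sketch (y_true and y_pred are hypothetical arrays of the same length):
# Mean squared error: average of the squared prediction errors
mse = np.mean((y_true - y_pred) ** 2)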
Once the model is defined, let's train our network:
- x_data is the input
- y_data is the expected output
- epochs=5 means our network will be trained 5 times on the whole dataset
- verbose=1 displays the progress and the loss in the console
# Learn
model.fit(x_data, y_data, epochs=5, verbose=1)
It should display something like:
Train on 100000 samples
Epoch 1/5
100000/100000 [==============================] - 4s 40us/sample - loss: 34.6061
Epoch 2/5
100000/100000 [==============================] - 4s 35us/sample - loss: 1.0512
Epoch 3/5
100000/100000 [==============================] - 3s 26us/sample - loss: 1.0080
Epoch 4/5
100000/100000 [==============================] - 3s 29us/sample - loss: 1.0080
Epoch 5/5
100000/100000 [==============================] - 3s 28us/sample - loss: 1.0082
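If you want to keep the loss values programmatically instead of reading them in the console, note that model.fit returns a History object (a small sketch assuming the same call as above):
# Keep the training history returned by fit
history = model.fit(x_data, y_data, epochs=5, verbose=1)
print(history.history['loss'])  # loss value after each epoch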
Once training is over, we can predict and display the output for each input:
# Predict (compute) the output
y_predicted = model.predict(x_data)
# Display the result
plt.scatter(x_data[::500], y_data[::500])
plt.plot(x_data, y_predicted, 'r', linewidth=4)
plt.grid()
plt.show()
Here is the result:
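The trained model can also be used on inputs that were not in the training set. As a quick sketch (the value 3.0 is arbitrary), the prediction should be close to 0.6 * 3 + 2 = 3.8:
# Predict the output for a new, unseen input value
print(model.predict(np.array([[3.0]])))  # expected to be close to 3.8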
Let's take a deeper look at our network. Once it is trained, we can print its weights:
>>> print( model.trainable_variables )
[<tf.Variable 'dense_6/kernel:0' shape=(1, 1) dtype=float32, numpy=array([[0.5970049]], dtype=float32)>, <tf.Variable 'dense_6/bias:0' shape=(1,) dtype=float32, numpy=array([1.9903255], dtype=float32)>]
The learned weights are 0.5970049 and 1.9903255, very close to our initial parameters a = 0.6 and b = 2!
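Equivalently, the learned slope and intercept can be read back with get_weights() and compared directly to a and b (a small sketch):
# Read the learned slope and intercept back from the layer
kernel, bias = model.get_weights()
print("learned a =", kernel[0][0], "  learned b =", bias[0])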
You can try this example online on Google Colaboratory.