Welcome to CS With James
In this tutorial I will discuss how to build your first Neural Network (NN) and train it on your machine.
I will explain the code line by line.
#Import the dependencies
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
This allows Python to load the modules we need.
batch_size = 500
num_classes = 10
epochs = 10
These are hyper-parameters used in machine learning. I will explain them in more detail in a different tutorial.
#Load the Data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
img_height, img_width = x_train.shape[1], x_train.shape[2]
# Flatten each 28x28 image into a 784-vector and scale pixels from 0~255 to 0~1
x_train = x_train.reshape(-1, img_height * img_width).astype('float32') / 255
x_test = x_test.reshape(-1, img_height * img_width).astype('float32') / 255
This method loads the data into memory. Later I will explain how to load your own data.
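To see what .load_data() gives you, here is a minimal sketch of inspecting the array shapes. It uses random stand-in arrays with MNIST's real shapes (so nothing is downloaded); the stand-in data itself is made up, and it assumes you have numpy installed:

```python
import numpy as np

# Stand-ins for mnist.load_data(): random arrays with MNIST's shapes,
# so this runs without downloading anything.
x_train = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(60000,), dtype=np.uint8)

print(x_train.shape)   # (60000, 28, 28): 60000 images of 28x28 pixels
print(y_train.shape)   # (60000,): one integer label per image

# Dense layers expect flat vectors, so each image is flattened to
# 784 values and the 0~255 pixel intensities are scaled to 0~1.
x_flat = x_train.reshape(-1, 28 * 28).astype('float32') / 255.0
print(x_flat.shape)    # (60000, 784)
```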
#Change the labels to one-hot encoding
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
The MNIST dataset contains the labels 0~9, but in machine learning we can't feed the model raw integer labels for classification; instead we use one-hot encoding.
0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
5 is represented as [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
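To make the idea concrete, here is a minimal sketch of one-hot encoding written by hand with numpy (the `one_hot` helper is my own illustration, not part of Keras; Keras does this for you with `to_categorical`):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Build a zero matrix with one row per label, then set a 1 in
    # the column matching each label's value.
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

print(one_hot([0, 5], 10))
# Row for 0 -> [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# Row for 5 -> [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```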
#Define the Model
model = Sequential()
model.add(Dense(32, activation='sigmoid', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))
In this step we define the model. In today's tutorial I will only use the Dense() layer. Every Dense layer needs the number of neurons and an activation function as parameters, and the first layer also needs input_shape. For now we use the sigmoid function for all layers except the last, which uses the softmax function.
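If you are curious what sigmoid and softmax actually compute, here is a minimal sketch of both with numpy (my own illustration; Keras applies these internally when you name them in a layer):

```python
import numpy as np

def sigmoid(z):
    # Squashes each value independently into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Exponentiate (shifted by the max for numerical stability) and
    # normalize, so the outputs are positive and sum to 1 -- one
    # probability per class, which is why it suits the last layer.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])
print(sigmoid(z))   # each entry between 0 and 1, independent of the others
print(softmax(z))   # entries form a probability distribution summing to 1
```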
#Print the model summary
model.summary()
#Determine Loss function and Optimizer
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
We are using this model for classification, so we use the categorical_crossentropy loss function. We use Adam for the optimizer and accuracy as the metric; I will make a separate tutorial on those later.
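To build some intuition for what categorical_crossentropy measures, here is a minimal worked sketch with numpy (the helper function and probability values are my own illustration, not Keras internals). For a one-hot label it reduces to the negative log of the probability the model gave to the correct class, so confident correct predictions get a small loss and confident wrong ones get a large loss:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # With one-hot y_true, only the correct class term survives:
    # the loss is -log(probability assigned to the true class).
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])   # true class is 5

confident = np.array([0.01]*5 + [0.91] + [0.01]*4)  # 91% on class 5
wrong     = np.array([0.91] + [0.01]*9)             # 91% on class 0

print(categorical_crossentropy(y_true, confident))  # small loss, about 0.094
print(categorical_crossentropy(y_true, wrong))      # large loss, about 4.6
```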
#Train the model
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
You can train your model using .fit()
x_train is the input dataset to train on
y_train is the answer (label) for each input
batch_size is how many samples are processed together in one weight update
epochs is how many full passes the training makes over the dataset
validation_data is a separate dataset used to check whether the model is learning to generalize, not just memorize.
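To see how batch_size and epochs interact, here is a quick sketch of the arithmetic (the 60000 is MNIST's training-set size; the variable names are my own):

```python
import math

num_samples = 60000   # MNIST training set size
batch_size = 500
epochs = 10

# Each epoch processes the data in chunks of batch_size, so this is
# how many weight updates happen per pass over the dataset.
steps_per_epoch = math.ceil(num_samples / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch)  # 120 updates per epoch
print(total_updates)    # 1200 updates over the whole run
```

A smaller batch_size means more updates per epoch (noisier but more frequent learning steps); a larger one means fewer, smoother updates that use more memory at once.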
#Test the Model
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Use .evaluate() to test the final model. It returns a list containing the loss and the metrics you asked for, in this case accuracy.
I trained the model and got 91.41% accuracy, which is pretty good, but in the next tutorial I will increase this accuracy little by little.