Multilayer Perceptron in Gluon

Now that we have learned how multilayer perceptrons (MLPs) work in theory, let's implement them. We begin, as always, by importing the required modules.

In [1]:
import gluonbook as gb
from mxnet import gluon, init
from mxnet.gluon import loss as gloss, nn

The Model

The only difference from softmax regression is the addition of a fully connected layer as a hidden layer. It has 256 hidden units and uses ReLU as its activation function.

In [2]:
net = nn.Sequential()
# A hidden layer with 256 units and ReLU activation, followed by the
# 10-unit output layer (one unit per Fashion-MNIST class).
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
# Initialize all weights from a Gaussian with standard deviation 0.01.
net.initialize(init.Normal(sigma=0.01))

One minor detail is of note when invoking net.add(): it adds one or more layers to the network at a time. That is, an equivalent to the two lines above would be net.add(nn.Dense(256, activation='relu'), nn.Dense(10)). Also note that Gluon automagically infers the missing parameters, such as the fact that the second layer needs a weight matrix of size \(256 \times 10\). This happens the first time the network is invoked.
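
To make this deferred shape inference concrete, here is a minimal sketch (the names net2 and X are ours, and the input width 784 assumes flattened \(28 \times 28\) Fashion-MNIST images):

from mxnet import nd

# Equivalent construction: add() accepts several layers at once.
net2 = nn.Sequential()
net2.add(nn.Dense(256, activation='relu'), nn.Dense(10))
net2.initialize(init.Normal(sigma=0.01))

# Parameter shapes are inferred lazily; they are only fixed once the
# first minibatch flows through the network.
X = nd.random.uniform(shape=(2, 784))  # a dummy minibatch of 2 examples
net2(X)
print(net2[1].weight.shape)  # (10, 256), inferred from the hidden layer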

We read the data and train the model with almost the same steps as in softmax regression.

In [3]:
batch_size = 256
train_iter, test_iter = gb.load_data_fashion_mnist(batch_size)

# Cross-entropy loss over the softmax outputs, minibatch SGD with a
# learning rate of 0.5, and the same training loop as before.
loss = gloss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.5})
num_epochs = 10
gb.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
             None, None, trainer)
epoch 1, loss 0.8066, train acc 0.695, test acc 0.830
epoch 2, loss 0.4986, train acc 0.814, test acc 0.852
epoch 3, loss 0.4353, train acc 0.839, test acc 0.854
epoch 4, loss 0.4006, train acc 0.852, test acc 0.866
epoch 5, loss 0.3750, train acc 0.862, test acc 0.865
epoch 6, loss 0.3574, train acc 0.868, test acc 0.876
epoch 7, loss 0.3462, train acc 0.873, test acc 0.875
epoch 8, loss 0.3288, train acc 0.879, test acc 0.875
epoch 9, loss 0.3172, train acc 0.882, test acc 0.881
epoch 10, loss 0.3099, train acc 0.886, test acc 0.874

Problems

  1. Try adding a few more hidden layers to see how the result changes (a starting-point sketch follows this list).
  2. Try out different activation functions. Which ones work best?
  3. Try out different initializations of the weights.
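
As a hedged starting point for these exercises, something like the following might do (the layer widths, the 'sigmoid' activation, and init.Xavier() are illustrative choices, not prescriptions):

# Problem 1: a deeper network with two hidden layers.
deeper = nn.Sequential()
deeper.add(nn.Dense(256, activation='relu'),
           nn.Dense(128, activation='relu'),
           nn.Dense(10))

# Problem 2: swap the activation, e.g. sigmoid instead of ReLU.
# deeper.add(nn.Dense(256, activation='sigmoid'))

# Problem 3: a different weight initialization, e.g. Xavier.
deeper.initialize(init.Xavier())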

Discuss on our Forum