
File I/O

So far we have discussed how to process data and how to build, train, and test deep learning models. However, at some point we will likely be happy with what we obtained and want to save the results for later use and distribution. Likewise, when running a long training process it is best practice to save intermediate results (checkpointing) to ensure that we don’t lose several days’ worth of computation when tripping over the power cord of our server. At the same time, we might want to load a pretrained model (e.g. we might have word embeddings for English and want to use them in our fancy spam classifier). For all of these cases we need to load and store both individual weight vectors and entire models. This section addresses both issues.

NDArray

In its simplest form, we can directly use the save and load functions to store and read NDArrays separately. This works just as expected.

In [1]:
from mxnet import nd
from mxnet.gluon import nn

x = nd.arange(4)
nd.save('x-file', x)

Then, we read the data from the stored file back into memory.

In [2]:
x2 = nd.load('x-file')
x2
Out[2]:
[
 [0. 1. 2. 3.]
 <NDArray 4 @cpu(0)>]

We can also store a list of NDArrays and read them back into memory.

In [3]:
y = nd.zeros(4)
nd.save('x-files', [x, y])
x2, y2 = nd.load('x-files')
(x2, y2)
Out[3]:
(
 [0. 1. 2. 3.]
 <NDArray 4 @cpu(0)>,
 [0. 0. 0. 0.]
 <NDArray 4 @cpu(0)>)

We can even write and read a dictionary that maps from a string to an NDArray. This is convenient, for instance, when we want to read or write all the weights in a model.

In [4]:
mydict = {'x': x, 'y': y}
nd.save('mydict', mydict)
mydict2 = nd.load('mydict')
mydict2
Out[4]:
{'x':
 [0. 1. 2. 3.]
 <NDArray 4 @cpu(0)>, 'y':
 [0. 0. 0. 0.]
 <NDArray 4 @cpu(0)>}
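
As a quick illustration of that use case, we could gather all the parameters of a Gluon block into exactly such a dictionary by hand. The following is only a minimal sketch, assuming a block named net such as the MLP defined in the next section; the built-in save_parameters function introduced below does essentially this for us.

# Sketch: collect every parameter array of a Gluon block `net` into a
# dictionary keyed by parameter name, then store it with nd.save.
weights = {name: param.data() for name, param in net.collect_params().items()}
nd.save('net-weights', weights)
# Reading the file back yields the same name-to-NDArray mapping.
weights2 = nd.load('net-weights')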

Gluon Model Parameters

Saving individual weight vectors (or other NDArray tensors) is useful, but it gets very tedious if we want to save (and later load) an entire model. After all, we might have hundreds of parameter groups sprinkled throughout. Writing a script that collects all of them and matches them to an architecture is quite some work. For this reason Gluon provides built-in functionality to load and save entire networks rather than just single weight vectors. An important detail to note is that this saves model parameters and not the entire model. That is, if we have a 3-layer MLP, we need to specify the architecture separately. The reason for this is that the models themselves can contain arbitrary code, hence they cannot be serialized quite so easily (there is a way to do this for compiled models - please refer to the MXNet documentation for the technical details). The result is that in order to reinstate a model we need to generate the architecture in code and then load the parameters from disk. Deferred initialization is quite advantageous here, since we can simply define a model without needing to put actual values in place. Let’s start with our favorite MLP.

In [5]:
class MLP(nn.Block):
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.hidden = nn.Dense(256, activation='relu')
        self.output = nn.Dense(10)

    def forward(self, x):
        return self.output(self.hidden(x))

net = MLP()
net.initialize()
x = nd.random.uniform(shape=(2, 20))
y = net(x)

Next, we store the parameters of the model in a file named ‘mlp.params’.

In [6]:
net.save_parameters('mlp.params')
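
This same call is what we would use for the checkpointing mentioned in the introduction: in a long training loop, saving the parameters every few epochs ensures that a crash only costs us the work done since the last checkpoint. Below is a minimal sketch; the number of epochs and the training step itself are placeholders, not part of this section.

# Hypothetical checkpointing loop; the actual training code is elided.
num_epochs = 30
for epoch in range(num_epochs):
    # ... run one epoch of training on `net` here ...
    if (epoch + 1) % 10 == 0:
        # Write a checkpoint every 10 epochs; load_parameters can restore it later.
        net.save_parameters('mlp-epoch-%d.params' % (epoch + 1))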

To check whether we are able to recover the model, we instantiate a clone of the original MLP model. Instead of randomly initializing the model parameters, we read the parameters stored in the file directly.

In [7]:
clone = MLP()
clone.load_parameters('mlp.params')

Since both instances have the same model parameters, the computation results for the same input x should be identical. Let’s verify this.

In [8]:
yclone = clone(x)
yclone == y
Out[8]:

[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
<NDArray 2x10 @cpu(0)>
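
As noted above, compiled (hybridized) models are the exception for which Gluon can also serialize the architecture itself. The following is a minimal sketch, assuming the standard HybridSequential/export workflow; the output file names are chosen by export, and the MXNet documentation covers the details.

# Sketch: a hybridizable network can be compiled into a symbolic graph and
# exported together with its parameters.
hnet = nn.HybridSequential()
hnet.add(nn.Dense(256, activation='relu'), nn.Dense(10))
hnet.initialize()
hnet.hybridize()
hnet(nd.random.uniform(shape=(2, 20)))  # run once so the graph gets traced
hnet.export('mlp-hybrid')  # writes mlp-hybrid-symbol.json and mlp-hybrid-0000.params

Such an exported model can later be loaded without the Python class definition, e.g. via gluon.nn.SymbolBlock.imports.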

Summary

  • The save and load functions can be used to perform File I/O for NDArray objects.
  • The load_parameters and save_parameters functions allow us to save entire sets of parameters for a network in Gluon.
  • Saving the architecture has to be done in code rather than in parameters.

Problems

  1. Even if there is no need to deploy trained models to a different device, what are the practical benefits of storing model parameters?
  2. Assume that we want to reuse only parts of a network to be incorporated into a network of a different architecture. How would you go about using, say, the first two layers from a previous network in a new network?
  3. How would you go about saving network architecture and parameters? What restrictions would you impose on the architecture?

Discuss on our Forum