This module allows the forward and backward passes of your neural net to be done in fp16 (also known as half precision). This is particularly important if you have an NVIDIA GPU with tensor cores, since it can speed up your training by 200% or more.
```python
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy]).to_fp16()
learn.fit_one_cycle(1)
```
```
Total time: 00:02
epoch  train loss  valid loss  accuracy
1      0.172265    0.146274    0.947498  (00:02)
```
Details about mixed precision training are available in NVIDIA's documentation. We will just summarize the basics here.
The only parameter you may want to tweak is `loss_scale`. This is used to scale the loss up, so that it doesn't underflow fp16 and cause a loss of accuracy (the scaling is reversed for the final gradient calculation after converting back to fp32). Generally, the default of 512 works well. You can also enable or disable the flattening of the master parameter tensor with `flat_master=True`; however, in our testing the difference was negligible.
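To make the underflow problem concrete, here is a small sketch of why loss scaling helps. The `to_fp16` helper and the gradient value below are illustrative (not part of fastai); the helper emulates fp16 storage by round-tripping a float through Python's `struct` half-precision format:

```python
import struct

def to_fp16(x):
    # Hypothetical helper: round-trip a float through IEEE half
    # precision (struct's 'e' format) to emulate fp16 storage.
    return struct.unpack('e', struct.pack('e', x))[0]

# An illustrative tiny gradient, as can occur late in training.
grad = 1e-8

# Stored directly in fp16 it underflows to zero: the smallest
# positive fp16 value is about 6e-8.
assert to_fp16(grad) == 0.0

# Multiplying the loss by loss_scale=512 multiplies every gradient
# by the same factor before it is stored in fp16.
loss_scale = 512
scaled = to_fp16(grad * loss_scale)  # 5.12e-6: representable in fp16

# After converting back to fp32, dividing by loss_scale recovers
# the true gradient (up to fp16 rounding error).
recovered = scaled / loss_scale
print(recovered)  # ≈ 1e-8
```

This is the sense in which the scaling is "reversed" after converting back to fp32: the gradients carry the factor of 512 until they are divided back down in full precision.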
Internally, the callback ensures that all model parameters (except batchnorm layers, which require fp32) are converted to fp16, and that an fp32 copy is saved as well. The fp32 copy (the *master parameters*) is what the optimizer actually updates; the fp16 parameters are used for calculating gradients. This helps avoid underflow with small learning rates.
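The master-parameter scheme can be sketched with a toy single-weight example (this is not fastai's implementation; `to_fp16` is a hypothetical helper that emulates fp16 rounding, and the learning rate and gradient are illustrative):

```python
import struct

def to_fp16(x):
    # Hypothetical helper: round-trip a float through IEEE half
    # precision (struct's 'e' format) to emulate fp16 storage.
    return struct.unpack('e', struct.pack('e', x))[0]

lr, grad = 1e-4, 0.5  # illustrative learning rate and gradient

# A single update of lr * grad = 5e-5 vanishes if applied in fp16:
# the gap between 1.0 and the next fp16 value below it is ~4.9e-4.
assert to_fp16(1.0 - lr * grad) == 1.0

# With a master copy, the optimizer updates in fp32 and the result
# is copied back into the fp16 model weights each step.
master_w = 1.0               # fp32 master parameter
model_w = to_fp16(master_w)  # fp16 working copy

for step in range(10):
    fp16_grad = to_fp16(grad)   # gradient from the fp16 backward pass
    master_w -= lr * fp16_grad  # the update happens in fp32
    model_w = to_fp16(master_w) # sync the fp16 copy for the next forward pass

# The fp32 master accumulated ten 5e-5 updates that were each
# individually too small to register in fp16.
print(master_w, model_w)  # ≈ 0.9995, ≈ 0.99951 (nearest fp16 value)
```

The point is that updates too small to change an fp16 weight still accumulate in the fp32 master, and eventually become visible when copied back to fp16.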
All of this is implemented by the following Callback.
You don't have to call the following functions yourself - they're called by the callback framework automatically. They're just documented here so you can see exactly what the callback is doing.