```python
from fastai.vision.all import *
```
Pure PyTorch to fastai
We’re going to use the MNIST training code from the official PyTorch examples, slightly reformatted for space, updated from Adadelta to AdamW, and converted from a script to a module. There’s a lot of code, so we’ve put it into `migrating_pytorch.py`!
The source script for `migrating_pytorch` is in the `examples` subdirectory of this folder if you checked out the fastai repo from git, or can be downloaded from here if you’re using an online viewer such as Colab.
```python
from migrating_pytorch import *
```
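If you don’t have the script to hand, here is a minimal sketch of the names the rest of this tutorial assumes it exports: `train_loader`, `test_loader`, `Net`, `epochs`, and `lr`. The stand-in model below is deliberately simplified (the real script defines a small CNN); it only needs to return log-probabilities, since we train with `F.nll_loss`.

```python
# Minimal stand-ins for what migrating_pytorch is assumed to export (sketch only).
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.ToTensor(),
                          transforms.Normalize((0.1307,), (0.3081,))])  # standard MNIST stats
train_loader = DataLoader(datasets.MNIST('data', train=True,  download=True, transform=tfm),
                          batch_size=256, shuffle=True)
test_loader  = DataLoader(datasets.MNIST('data', train=False, download=True, transform=tfm),
                          batch_size=512)

class Net(nn.Module):
    "Simplified stand-in for the CNN defined in migrating_pytorch.py."
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(28*28, 128), nn.ReLU(),
                                  nn.Linear(128, 10))
    def forward(self, x): return F.log_softmax(self.body(x), dim=-1)

epochs, lr = 1, 1e-3  # illustrative values, not the ones from the original script
```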
We can entirely replace the custom training loop with fastai’s. That means you can get rid of `train()`, `test()`, and the epoch loop in the original code, and replace it all with just this:
```python
data = DataLoaders(train_loader, test_loader)
learn = Learner(data, Net(), loss_func=F.nll_loss, opt_func=Adam, metrics=accuracy)
```
Data is automatically moved to the GPU or CPU depending on what’s available, without the need for extra callbacks or overhead.
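If you want to confirm which device that is, `default_device` (brought in by the fastai import above) reports it:

```python
# Returns a cuda device when a GPU is visible to PyTorch, otherwise the CPU.
print(default_device())
```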
fastai supports many schedulers. We recommend fitting with one-cycle training:
```python
learn.fit_one_cycle(epochs, lr)
```
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 0.130664 | 0.049394 | 0.984200 | 01:16 |
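One-cycle is a recommendation, not a requirement; the same `Learner` works with fastai’s other fitting methods, for example:

```python
# Two alternatives to fit_one_cycle, using the same Learner:
learn.fit(epochs, lr)           # constant learning rate
learn.fit_flat_cos(epochs, lr)  # flat learning rate followed by cosine annealing
```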
As you can see, migrating from pure PyTorch allows you to remove a lot of code, and doesn’t require you to change any of your existing data pipelines, optimizers, loss functions, models, etc.
Once you’ve made this change, you can then benefit from fastai’s rich set of callbacks, transforms, visualizations, and so forth.
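As a small illustration (the combination below is just an example, not part of the original script), callbacks can be attached to the `Learner` itself or passed to an individual fit call:

```python
# Log every epoch's metrics to a CSV file, and stop a run early
# if validation accuracy stops improving for two epochs in a row.
learn = Learner(data, Net(), loss_func=F.nll_loss, opt_func=Adam, metrics=accuracy,
                cbs=CSVLogger())
learn.fit_one_cycle(epochs, lr, cbs=EarlyStoppingCallback(monitor='accuracy', patience=2))
```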
Note that fastai is much more than just a training loop (although we’re only using the training loop in this example): it is a complete framework including GPU-accelerated transformations, end-to-end inference, integrated applications for vision, text, tabular, and collaborative filtering, and so forth. You can use any part of the framework on its own, or combine them, as described in the fastai paper.