Callbacks and helper functions to track the progress of training or log results
from fastai.test_utils import *

class ProgressCallback[source]

ProgressCallback(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, after_backward=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

A Callback to handle the display of progress bars

learn = synth_learner()
learn.fit(5)
epoch  train_loss  valid_loss  time
0      12.347102   12.212431   00:00
1      10.881455    8.909552   00:00
2       9.235189    5.996475   00:00
3       7.660278    3.935604   00:00
4       6.281022    2.550308   00:00
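
ProgressCallback is added to every Learner by default, which is why the bars above appear without any extra setup. As a minimal sketch (assuming the standard Learner.remove_cb API), it can also be removed permanently instead of silenced per call with no_bar below:

learn = synth_learner()
learn.remove_cb(ProgressCallback)  # no bars for any later fit call
learn.fit(1)                       # the Recorder still prints the epoch stats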

Learner.no_bar[source]

Learner.no_bar()

Context manager that deactivates the use of progress bars

learn = synth_learner()
with learn.no_bar(): learn.fit(5)
[0, 19.08680534362793, 15.95688533782959, '00:00']
[1, 16.612403869628906, 11.260307312011719, '00:00']
[2, 13.902640342712402, 7.271045684814453, '00:00']
[3, 11.376837730407715, 4.473145484924316, '00:00']
[4, 9.196401596069336, 2.684819221496582, '00:00']
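
no_bar only hides the bars; the Recorder still prints one line of stats per epoch through learn.logger, as shown above. A minimal sketch, combining it with the separate Learner.no_logging context manager, of silencing the printed lines as well:

learn = synth_learner()
with learn.no_bar(), learn.no_logging(): learn.fit(5)  # nothing is displayed
learn.recorder.values[-1]  # the losses are still recorded, just not shown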

ProgressCallback.before_fit[source]

ProgressCallback.before_fit()

Set up the master bar over the epochs

ProgressCallback.before_epoch[source]

ProgressCallback.before_epoch()

Update the master bar

ProgressCallback.before_train[source]

ProgressCallback.before_train()

Launch a progress bar over the training dataloader

ProgressCallback.before_validate[source]

ProgressCallback.before_validate()

Launch a progress bar over the validation dataloader

ProgressCallback.after_batch[source]

ProgressCallback.after_batch()

Update the current progress bar

ProgressCallback.after_train[source]

ProgressCallback.after_train()

Close the progress bar over the training dataloader

ProgressCallback.after_validate[source]

ProgressCallback.after_validate()

Close the progress bar over the validation dataloader

ProgressCallback.after_fit[source]

ProgressCallback.after_fit()

Close the master bar
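
Together these events cover the whole display lifecycle: setup in before_fit, per-epoch and per-batch updates, and teardown in after_fit. As an illustration only, here is a minimal sketch of a custom callback that hooks the same events to report progress as plain text (the class name, messages, and the batches attribute are made up for this example):

from fastai.callback.core import Callback

class PrintProgress(Callback):
    "Report progress with plain print calls instead of progress bars"
    def before_fit(self):   print(f"fit: {self.n_epoch} epochs")
    def before_epoch(self): print(f"epoch {self.epoch}")
    def before_train(self): self.batches = 0
    def after_batch(self):
        if self.training: self.batches += 1
    def after_train(self):  print(f"  trained on {self.batches} batches")
    def after_fit(self):    print("fit: done")

learn = synth_learner(cbs=PrintProgress())
with learn.no_bar(): learn.fit(1)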

class ShowGraphCallback[source]

ShowGraphCallback(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, after_backward=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

Update a graph of training and validation loss

learn = synth_learner(cbs=ShowGraphCallback())
learn.fit(5)
epoch  train_loss  valid_loss  time
0      23.818842   24.120615   00:00
1      20.847765   16.936844   00:00
2      17.464808   11.186396   00:00
3      14.341479    7.079573   00:00
4      11.646442    4.333014   00:00
learn.predict(torch.tensor([[0.1]]))
(tensor([1.3757]), tensor([1.3757]), tensor([1.3757]))
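
Like any callback, ShowGraphCallback can also be attached for a single training run by passing it to the cbs argument of fit; a minimal sketch:

learn = synth_learner()
learn.fit(5, cbs=ShowGraphCallback())  # the graph is drawn for this fit only
learn.fit(1)                           # later fits run without the graph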

class CSVLogger[source]

CSVLogger(fname='history.csv', append=False) :: Callback

Log the results displayed in learn.path/fname

If append is True, the results are appended to an existing file; otherwise the file is overwritten.

learn = synth_learner(cbs=CSVLogger())
learn.fit(5)
epoch  train_loss  valid_loss  time
0      14.464121   16.486717   00:00
1      12.813704   12.088200   00:00
2      10.909700    8.334162   00:00
3       9.097164    5.539018   00:00
4       7.499405    3.602823   00:00
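
As noted above, passing append=True makes a later run add its rows to the same file instead of overwriting it. A minimal sketch with the default history.csv name (the file is removed at the end to clean up):

import os
learn = synth_learner(cbs=CSVLogger(append=True))
learn.fit(2)  # first run writes its rows to learn.path/'history.csv'
learn.fit(2)  # second run appends its rows instead of overwriting the file
learn.csv_logger.read_log()                   # rows from both runs are present
os.remove(learn.path/learn.csv_logger.fname)  # clean up the example file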

CSVLogger.read_log[source]

CSVLogger.read_log()

Convenience method to quickly access the log.

# The logged CSV mirrors what the Recorder tracked during training
df = learn.csv_logger.read_log()
test_eq(df.columns.values, learn.recorder.metric_names)
for i,v in enumerate(learn.recorder.values):
    test_close(df.iloc[i][:3], [i] + v)  # epoch index plus train/valid loss
os.remove(learn.path/learn.csv_logger.fname)  # delete the log once checked

CSVLogger.before_fit[source]

CSVLogger.before_fit()

Prepare file with metric names.

CSVLogger.after_fit[source]

CSVLogger.after_fit()

Close the file and clean up.