A Callback that keeps track of the best value in monitor.
|              | Type     | Default    | Details |
|--------------|----------|------------|---------|
| monitor      | str      | valid_loss | value (usually loss or metric) being monitored |
| comp         | NoneType | None       | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric |
| min_delta    | float    | 0.0        | minimum delta between the last monitor value and the best monitor value |
| reset_on_fit | bool     | True       | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss) |
When implementing a Callback whose behavior depends on the best value of a metric or loss, subclass this Callback and use its best (best value so far) and new_best (there was a new best value this epoch) attributes. If you want to maintain best over subsequent calls to fit (e.g., Learner.fit_one_cycle), set reset_on_fit = False.
comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float that requires a new value to beat the current best (in the direction of comp) by at least that amount.
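The default-comparator and min_delta rules above can be sketched in plain numpy (a hedged sketch of the documented behavior, not the actual fastai source):

```python
import numpy as np

def pick_comp(monitor):
    # Default comparator: minimize anything whose name contains 'loss',
    # maximize everything else (sketch of the documented default).
    return np.less if 'loss' in monitor else np.greater

def is_new_best(val, best, comp, min_delta=0.0):
    # min_delta shifts the bar: a loss must drop by at least min_delta,
    # a metric must rise by at least min_delta, to count as a new best.
    delta = -min_delta if comp is np.less else min_delta
    return bool(comp(val - delta, best))
```

For example, with monitor='valid_loss' and min_delta=0.05, a value of 0.59 does not improve on a best of 0.60, because it fails to undercut it by the required margin.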
A TrackerCallback that terminates training when monitored quantity stops improving.
|              | Type     | Default    | Details |
|--------------|----------|------------|---------|
| monitor      | str      | valid_loss | value (usually loss or metric) being monitored |
| comp         | NoneType | None       | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric |
| min_delta    | float    | 0.0        | minimum delta between the last monitor value and the best monitor value |
| patience     | int      | 1          | number of epochs to wait without improvement before stopping training |
| reset_on_fit | bool     | True       | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss) |
comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float that requires a new value to beat the current best (in the direction of comp) by at least that amount. patience is the number of epochs you're willing to wait without improvement.
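The patience bookkeeping can be illustrated with a standalone loop over a per-epoch loss history (an assumed sketch of the stopping rule described above, not fastai's implementation):

```python
def stopping_epoch(history, patience=1, min_delta=0.0):
    # Given a per-epoch history of a monitored loss (lower is better),
    # return the 0-indexed epoch at which early stopping would trigger,
    # or None if training runs to completion.
    best, wait = float('inf'), 0
    for epoch, val in enumerate(history):
        if val < best - min_delta:      # new best: reset the counter
            best, wait = val, 0
        else:                           # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None
```

With patience=1, training stops at the first epoch that fails to improve on the best value; a larger patience tolerates that many non-improving epochs in a row.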
A TrackerCallback that saves the model’s best during training and loads it at the end.
|              | Type     | Default    | Details |
|--------------|----------|------------|---------|
| monitor      | str      | valid_loss | value (usually loss or metric) being monitored |
| comp         | NoneType | None       | numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric |
| min_delta    | float    | 0.0        | minimum delta between the last monitor value and the best monitor value |
| fname        | str      | model      | model name to be used when saving model |
| every_epoch  | bool     | False      | if true, save model after every epoch; else save only when model is better than existing best |
| at_end       | bool     | False      | if true, save model when training ends; else load best model if there is only one saved model |
| with_opt     | bool     | False      | if true, save optimizer state (if any available) when saving model |
| reset_on_fit | bool     | True       | before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss) |
comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float that requires a new value to beat the current best (in the direction of comp) by at least that amount. The model is saved in learn.path/learn.model_dir/name.pth: after every epoch if every_epoch=True, every nth epoch if an integer is passed to every_epoch, or otherwise at each improvement of the monitored quantity.
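The save location can be sketched as a small helper (the "_{epoch}" suffix used for every_epoch checkpoints is an assumption for illustration, as is the helper itself; it is not part of the fastai API):

```python
from pathlib import Path

def save_path(path, model_dir, fname, epoch=None, every_epoch=False):
    # Sketch of where a checkpoint lands: learn.path/learn.model_dir/fname.pth,
    # with a hypothetical per-epoch suffix when saving every epoch.
    name = f'{fname}_{epoch}' if every_epoch else fname
    return Path(path) / model_dir / f'{name}.pth'
```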
Better model found at epoch 0 with valid_loss value: 12.539285659790039.
Better model found at epoch 1 with valid_loss value: 12.123456001281738.
Better model found at epoch 0 with valid_loss value: 5.5791521072387695.
Better model found at epoch 1 with valid_loss value: 5.445522308349609.
Each of these three derived TrackerCallbacks (SaveModelCallback, ReduceLROnPlateau, and EarlyStoppingCallback) has an adjusted order so that they can run alongside each other without interference. That order is as follows:
Note
the number in parentheses is the actual Callback order number