Wandb
First things first, you need to install wandb with

```bash
pip install wandb
```

Create a free account, then run

```bash
wandb login
```

in your terminal. Follow the link to get an API token that you will need to paste, then you're all set!
WandbCallback
```python
WandbCallback (log:str=None, log_preds:bool=True, log_preds_every_epoch:bool=False,
               log_model:bool=False, model_name:str=None, log_dataset:bool=False,
               dataset_name:str=None, valid_dl:fastai.data.core.TfmdDL=None,
               n_preds:int=36, seed:int=12345, reorder=True)
```
Saves model topology, losses & metrics
| | Type | Default | Details |
|---|---|---|---|
| log | str | None | What to log (can be `gradients`, `parameters`, `all` or `None`) |
| log_preds | bool | True | Whether to log model predictions on a `wandb.Table` |
| log_preds_every_epoch | bool | False | Whether to log predictions every epoch or at the end |
| log_model | bool | False | Whether to save the model checkpoint to a `wandb.Artifact` |
| model_name | str | None | The name of the model to save, overrides `SaveModelCallback` |
| log_dataset | bool | False | Whether to log the dataset to a `wandb.Artifact` |
| dataset_name | str | None | A name to log the dataset with |
| valid_dl | TfmdDL | None | If `log_preds=True`, the samples will be drawn from `valid_dl` |
| n_preds | int | 36 | How many samples to log predictions on |
| seed | int | 12345 | The seed used to draw the samples |
| reorder | bool | True | |
Optionally logs weights and/or gradients depending on `log` (can be "gradients", "parameters", "all" or `None`), as well as sample predictions if `log_preds=True`, drawn from `valid_dl` or from a random sample of the validation set (determined by `seed`). `n_preds` samples are logged in this case.

If used in combination with `SaveModelCallback`, the best model is saved as well (this can be deactivated with `log_model=False`).
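For instance, a minimal sketch combining the two callbacks, assuming a `learn` object has already been created:

```python
from fastai.callback.wandb import WandbCallback
from fastai.callback.tracker import SaveModelCallback

# Log gradients and upload the best checkpoint found by SaveModelCallback
learn.fit_one_cycle(5, cbs=[WandbCallback(log='gradients', log_model=True),
                            SaveModelCallback(monitor='valid_loss')])
```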
Datasets can also be tracked:

- if `log_dataset` is `True`, the tracked folder is retrieved from `learn.dls.path`
- `log_dataset` can explicitly be set to the folder to track
- the name of the dataset can explicitly be given through `dataset_name`, otherwise it is set to the folder name
- Note: the subfolder "models" is always ignored
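A minimal sketch of dataset tracking through the callback; the dataset name is an illustrative placeholder:

```python
# Tracks the folder at learn.dls.path under an explicit name
learn.fit(1, cbs=WandbCallback(log_dataset=True, dataset_name='my-dataset'))
```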
For custom scenarios, you can also manually use the functions `log_dataset` and `log_model` to log your own datasets and models, respectively.
Learner.gather_args
```python
Learner.gather_args ()
```
Gather config parameters accessible to the learner
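As a quick sketch, you can call it to inspect the config parameters the callback gathers from the learner, assuming a `learn` object exists:

```python
# Returns a dict of config parameters gathered from the learner
args = learn.gather_args()
print(args)
```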
log_dataset
```python
log_dataset (path, name=None, metadata={}, description='raw dataset')
```
Log dataset folder
log_model
```python
log_model (path, name=None, metadata={}, description='trained model')
```
Log model file
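A hedged sketch of manual logging; the paths and names below are placeholders, and `wandb.init()` must have been called first:

```python
from fastai.callback.wandb import log_dataset, log_model

# Both calls log a wandb.Artifact; paths and names are placeholders
log_dataset('data/my_dataset', name='my-dataset', description='raw dataset')
log_model('models/model.pth', name='my-model', description='trained model')
```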
Example of use:

Once you have defined your `Learner`, and before you call `fit` or `fit_one_cycle`, you need to initialize wandb:

```python
import wandb
wandb.init()
```

To use Weights & Biases without an account, you can call `wandb.init(anonymous='allow')`.
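You can also pass standard `wandb.init` options such as a project or run name; the values below are placeholders:

```python
import wandb

# 'project' and 'name' are standard wandb.init arguments
wandb.init(project='fastai-experiments', name='baseline-run')
```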
Then you add the callback to your `Learner` or to your call to the `fit` methods, potentially with `SaveModelCallback` if you want to save the best model:

```python
from fastai.callback.wandb import *

# To log only during one training phase
learn.fit(..., cbs=WandbCallback())

# To log continuously for all training phases
learn = Learner(..., cbs=WandbCallback())
```
Datasets and models can be tracked through the callback or directly through the `log_model` and `log_dataset` functions.

For more details, refer to the W&B documentation.
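Putting it all together, here is a minimal end-to-end sketch using the MNIST sample dataset; the project name is a placeholder and `anonymous='allow'` avoids needing an account:

```python
import wandb
from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback

wandb.init(project='fastai-wandb-demo', anonymous='allow')  # placeholder project name

# Standard fastai MNIST sample: downloads automatically on first use
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)

learn = vision_learner(dls, resnet18, metrics=accuracy, cbs=WandbCallback(log_preds=False))
learn.fine_tune(1)

wandb.finish()  # close the run
```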