Integration with TensorBoard
 

First things first: install TensorBoard with

pip install tensorboard

Then launch tensorboard with

tensorboard --logdir=runs

in your terminal. You can change the logdir as long as it matches the log_dir you pass to TensorBoardCallback (default is runs in the working directory).

TensorBoard Embedding Projector support

The TensorBoard Embedding Projector is currently only supported for image classification.

Export Image Features during Training

The TensorBoard Embedding Projector is supported in TensorBoardCallback (set projector=True) during training. The validation set embeddings will be written after each epoch.

cbs = [TensorBoardCallback(projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(3, cbs=cbs)

Export Image Features during Inference

To write the embeddings for a custom dataset (e.g. after loading a learner), use TensorBoardProjectorCallback. Add the callback manually to the learner.

learn = load_learner('path/to/export.pkl')
learn.add_cb(TensorBoardProjectorCallback())
dl = learn.dls.test_dl(files, with_labels=True)
_ = learn.get_preds(dl=dl)

If you are using a custom model (i.e. not a fastai resnet), pass the layer from which the embeddings should be extracted as a parameter to the callback.

layer = learn.model[1][1]
cbs = [TensorBoardProjectorCallback(layer=layer)]
preds = learn.get_preds(dl=dl, cbs=cbs)

Export Word Embeddings from Language Models

Exporting word embeddings has been tested with AWD_LSTM (fast.ai) and GPT2 / BERT (transformers), but it works with every model that contains an embedding layer.

For a fast.ai TextLearner or LMLearner just pass the learner - the embedding layer and vocab will be extracted automatically:

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn=learn, limit=2000, start=2000)

For other language models - like the ones in the transformers library - you'll have to pass the layer and vocab. Here's an example for a BERT model.

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# get the word embedding layer
layer = model.embeddings.word_embeddings

# get and sort vocab
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]

# write the embeddings for tb projector
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000)
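The sorting step above matters because the vocab is passed as a plain list: position i in the list has to line up with row i of the embedding matrix, so the tokens are ordered by their id. A standalone illustration of that step with a toy vocab dict (the tokens here are made up):

```python
# Toy stand-in for tokenizer.get_vocab(), which maps token -> id
vocab_dict = {'hello': 2, '[CLS]': 0, 'world': 1}

# Sort tokens by id so that list position i matches embedding row i
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
print(vocab)  # ['[CLS]', 'world', 'hello']
```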

class TensorBoardBaseCallback[source]

TensorBoardBaseCallback() :: Callback

Base class for tensorboard callbacks

class TensorBoardCallback[source]

TensorBoardCallback(log_dir=None, trace_model=True, log_preds=True, n_preds=9, projector=False, layer=None) :: TensorBoardBaseCallback

Saves model topology, losses & metrics for tensorboard and tensorboard projector during training

class TensorBoardProjectorCallback[source]

TensorBoardProjectorCallback(log_dir=None, layer=None) :: TensorBoardBaseCallback

Extracts and exports image features for tensorboard projector during inference

projector_word_embeddings[source]

projector_word_embeddings(learn=None, layer=None, vocab=None, limit=-1, start=0, log_dir=None)

Extracts and exports word embeddings from language models' embedding layers
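The start and limit parameters control which part of the vocab is exported (limit=-1 exports everything). A minimal sketch of that selection, assuming it amounts to a simple slice of the vocab — consistent with the examples above, which skip the first 2000 tokens and take the next 2000:

```python
vocab = [f'tok{i}' for i in range(10)]  # hypothetical stand-in for a real vocab
start, limit = 2, 3

# Assumed behavior: `limit` tokens beginning at `start`; limit=-1 takes the rest
selected = vocab[start:] if limit == -1 else vocab[start:start + limit]
print(selected)  # ['tok2', 'tok3', 'tok4']
```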

Test

from fastai.vision.all import Resize, RandomSubsetSplitter, aug_transforms, cnn_learner, resnet18

TensorBoardCallback

path = untar_data(URLs.PETS)

db = DataBlock(blocks=(ImageBlock, CategoryBlock), 
                  get_items=get_image_files, 
                  item_tfms=Resize(128),
                  splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
                  batch_tfms=aug_transforms(size=64),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))

dls = db.dataloaders(path/'images')
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=TensorBoardCallback(Path.home()/'tmp'/'runs'/'tb', trace_model=True))
epoch train_loss valid_loss accuracy time
0 5.099678 6.461753 0.054795 00:17
1 4.268377 5.123801 0.095890 00:15
2 3.713221 3.094202 0.178082 00:14
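The get_y regex above extracts the pet breed from the filename — Pets filenames look like '<breed>_<number>.jpg'. A quick standalone check of the pattern with plain re (the filename is illustrative):

```python
import re

# Same pattern as in the RegexLabeller above
pattern = r'(.+)_\d+.*$'

# Greedy (.+) backtracks to the last underscore followed by digits,
# so the whole multi-word breed name ends up in group 1
m = re.match(pattern, 'great_pyrenees_173.jpg')
label = m.group(1)
print(label)  # great_pyrenees
```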

Projector

Projector in TensorBoardCallback

path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock), 
                  get_items=get_image_files, 
                  item_tfms=Resize(128),
                  splitter=RandomSubsetSplitter(train_sz=0.05, valid_sz=0.01),
                  batch_tfms=aug_transforms(size=64),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))

dls = db.dataloaders(path/'images')
cbs = [TensorBoardCallback(log_dir=Path.home()/'tmp'/'runs'/'vision1', projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=cbs)
epoch train_loss valid_loss accuracy time
0 5.187016 8.598857 0.054795 00:07
1 4.667810 6.271108 0.136986 00:07
2 4.169415 4.457378 0.136986 00:08

TensorBoardProjectorCallback

path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock), 
                  get_items=get_image_files, 
                  item_tfms=Resize(128),
                  splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
                  batch_tfms=aug_transforms(size=64),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))

dls = db.dataloaders(path/'images')
files = get_image_files(path/'images')
files = files[:256]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
dl = learn.dls.test_dl(files, with_labels=True)
layer = learn.model[1][0].ap
cbs = [TensorBoardProjectorCallback(layer=layer, log_dir=Path.home()/'tmp'/'runs'/'vision2')]
_ = learn.get_preds(dl=dl, cbs=cbs)

projector_word_embeddings

fastai TextLearner or LMLearner

from fastai.text.all import TextDataLoaders, text_classifier_learner, AWD_LSTM
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn, limit=1000, log_dir=Path.home()/'tmp'/'runs'/'text')

transformers

GPT2

 
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# get the word embedding layer
layer = model.transformer.wte

# get and sort vocab
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]

# write the embeddings for tb projector
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000)
 

BERT

 
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# get the word embedding layer
layer = model.embeddings.word_embeddings

# get and sort vocab
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]

# write the embeddings for tb projector
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000)
 

Validate results in tensorboard

Run the following command in the command line to check whether the projector embeddings have been written correctly:

tensorboard --logdir=~/tmp/runs

Open http://localhost:6006 in your browser (the TensorBoard Projector doesn't work correctly in Safari!)