First things first, you need to install tensorboard with
pip install tensorboard
Then launch tensorboard with
tensorboard --logdir=runs
in your terminal. You can change the logdir as long as it matches the log_dir you pass to TensorBoardCallback (the default is runs in the working directory).
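For example, if you launch tensorboard with tensorboard --logdir=my_logs, pass the same directory to the callback. A minimal sketch; my_logs and the pre-existing learn are placeholders for illustration:
# launched in the terminal with: tensorboard --logdir=my_logs
cbs = [TensorBoardCallback(log_dir='my_logs')]  # log_dir must match --logdir
learn.fit_one_cycle(1, cbs=cbs)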
Note that the Tensorboard Embedding Projector is currently only supported for image classification.
The Embedding Projector is supported in TensorBoardCallback (set parameter projector=True) during training. The validation set embeddings will be written after each epoch.
cbs = [TensorBoardCallback(projector=True)]
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(3, cbs=cbs)
To write the embeddings for a custom dataset (e.g. after loading a learner) use TensorBoardProjectorCallback. Add the callback manually to the learner.
learn = load_learner('path/to/export.pkl')
learn.add_cb(TensorBoardProjectorCallback())
dl = learn.dls.test_dl(files, with_labels=True)
_ = learn.get_preds(dl=dl)
If you are using a custom model (not a fastai resnet), pass the layer where the embeddings should be extracted as a callback parameter.
layer = learn.model[1][1]
cbs = [TensorBoardProjectorCallback(layer=layer)]
preds = learn.get_preds(dl=dl, cbs=cbs)
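If you are unsure which layer to pick in a custom model, printing the model and choosing the layer just before the classification head usually works. A hedged sketch, assuming a plain torchvision ResNet wrapped in a Learner (the avgpool attribute is specific to that architecture and only illustrative):
print(learn.model)            # inspect the module tree to find a suitable layer
layer = learn.model.avgpool   # e.g. the global average pooling layer of a torchvision ResNet
cbs = [TensorBoardProjectorCallback(layer=layer)]
preds = learn.get_preds(dl=dl, cbs=cbs)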
You can also export word embeddings from language models (tested with AWD_LSTM (fast.ai) and GPT2 / BERT (transformers)); this works with every model that contains an embedding layer.
For a fast.ai TextLearner or LMLearner just pass the learner - the embedding layer and vocab will be extracted automatically:
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn=learn, limit=2000, start=2000)
For other language models - like the ones in the transformers library - you'll have to pass the layer and vocab. Here's an example for a BERT model.
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# get the word embedding layer
layer = model.embeddings.word_embeddings
# get and sort vocab
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
# write the embeddings for tb projector
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000)
The following examples run the callbacks end to end on the PETS and IMDB datasets.
from fastai.vision.all import *
from fastai.callback.tensorboard import TensorBoardCallback, TensorBoardProjectorCallback, projector_word_embeddings
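# TensorBoardCallback: log training metrics and the model graph (trace_model=True) during training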
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=TensorBoardCallback(Path.home()/'tmp'/'runs'/'tb', trace_model=True))
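# TensorBoardCallback with projector=True: also write validation set embeddings during training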
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.05, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
cbs = [TensorBoardCallback(log_dir=Path.home()/'tmp'/'runs'/'vision1', projector=True)]
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=cbs)
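# TensorBoardProjectorCallback: write embeddings for a custom set of images after training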
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
item_tfms=Resize(128),
splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
batch_tfms=aug_transforms(size=64),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
files = get_image_files(path/'images')
files = files[:256]
learn = vision_learner(dls, resnet18, metrics=accuracy)
dl = learn.dls.test_dl(files, with_labels=True)
layer = learn.model[1][0].ap
cbs = [TensorBoardProjectorCallback(layer=layer, log_dir=Path.home()/'tmp'/'runs'/'vision2')]
_ = learn.get_preds(dl=dl, cbs=cbs)
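# projector_word_embeddings with a fastai text learner (AWD_LSTM)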
from fastai.text.all import TextDataLoaders, text_classifier_learner, AWD_LSTM
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn, limit=1000, log_dir=Path.home()/'tmp'/'runs'/'text')
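# projector_word_embeddings with a GPT2 model from the transformers library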
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
layer = model.transformer.wte
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, log_dir=Path.home()/'tmp'/'runs'/'transformers')
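# projector_word_embeddings with a BERT model from the transformers library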
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
layer = model.embeddings.word_embeddings
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000, log_dir=Path.home()/'tmp'/'runs'/'transformers')
Run the following command in the command line to check if the projector embeddings have been correctly written:
tensorboard --logdir=~/tmp/runs
Open http://localhost:6006 in your browser (TensorBoard Projector doesn't work correctly in Safari!).
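Alternatively, a quick way to sanity-check that something was written is to list the files under the log directories. A minimal sketch, assuming the log_dir values used in the examples above:
from pathlib import Path
for f in sorted((Path.home()/'tmp'/'runs').rglob('*')):
    print(f)  # event files and projector data should appear here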