Basic dataset for NLP tasks and helper functions to create a DataBunch

NLP datasets

This module contains the TextDataset class, which is the main dataset you should use for your NLP tasks. It automatically does the preprocessing steps described in text.transform. It also contains all the functions to quickly get a TextDataBunch ready.

Quickly assemble your data

You should get your data in one of the following formats to make the most of the fastai library and use one of the factory methods of one of the TextDataBunch classes:

  • raw text files in folders train, valid, test in an ImageNet style,
  • a csv where some column(s) give the label(s) and the following one(s) the associated text,
  • a dataframe structured the same way,
  • tokens and labels arrays,
  • ids, vocabulary (correspondence from id to word) and labels.

If you are assembling the data for a language model, you should set all the labels to 0 to respect those formats. The first time you create a DataBunch with one of those functions, your data will be preprocessed automatically. You can then save it, so that the next time you need it, loading is almost instantaneous.

Below are the classes that help assemble the raw data into a DataBunch suitable for NLP.

class TextLMDataBunch[source]

TextLMDataBunch(train_dl:DataLoader, valid_dl:DataLoader, test_dl:Optional[DataLoader]=None, device:device=None, tfms:Optional[Collection[Callable]]=None, path:PathOrStr='.', collate_fn:Callable='data_collate') :: TextDataBunch

Create a DataBunch suitable for language modeling: all the texts in the datasets are concatenated and the labels are ignored. Instead, the target is the next word in the sentence.

show_batch[source]

show_batch(sep=' ', ds_type:DatasetType=<DatasetType.Train: 1>, rows:int=10, max_len:int=100)

Show rows texts from a batch of ds_type; tokens are joined with sep and truncated at max_len.
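
For instance, on the IMDB sample used in the example further down, a quick look at a language model batch could be obtained like this (a minimal sketch; the exact texts displayed depend on the random split):

from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
# show 5 rows of a training batch, truncating each one at 20 tokens
data_lm.show_batch(rows=5, max_len=20)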

class TextClasDataBunch[source]

TextClasDataBunch(train_dl:DataLoader, valid_dl:DataLoader, test_dl:Optional[DataLoader]=None, device:device=None, tfms:Optional[Collection[Callable]]=None, path:PathOrStr='.', collate_fn:Callable='data_collate') :: TextDataBunch

Create a DataBunch suitable for a text classifier: all the texts are grouped by length (with a bit of randomness for the training set) then padded.

show_batch[source]

show_batch(rows:int=None, ds_type:DatasetType=<DatasetType.Train: 1>, **kwargs)

Show a batch of data in ds_type on a few rows.

class TextDataBunch[source]

TextDataBunch(train_dl:DataLoader, valid_dl:DataLoader, test_dl:Optional[DataLoader]=None, device:device=None, tfms:Optional[Collection[Callable]]=None, path:PathOrStr='.', collate_fn:Callable='data_collate') :: DataBunch

Create a DataBunch with the raw texts. This is only going to work if they all have the same length.

Factory methods (TextDataBunch)

All those classes have the following factory methods.

from_folder[source]

from_folder(path:PathOrStr, train:str='train', valid:str='valid', test:Optional[str]=None, classes:ArgStar=None, tokenizer:Tokenizer=None, vocab:Vocab=None, **kwargs)

This function will create a DataBunch from texts placed in path, in train, valid and maybe test folders. Text files in the train and valid folders should be placed in subdirectories according to their classes (always the same class for a language model), while the ones in the test folder should all be placed there directly. tokenizer will be used to parse those texts into tokens. The shuffle flag will optionally shuffle the texts found.

You can pass a specific vocab for the numericalization step (for instance, if you are building a classifier from a language model you fine-tuned). kwargs will be split between the TextDataset function and the class initialization; there you can specify parameters such as max_vocab, chunksize, min_freq, n_labels (see the TextDataset documentation) or bs, bptt and pad_idx (see the sections LM data and classifier data).
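
As a sketch, assuming a hypothetical folder my_texts/ laid out in the ImageNet style described above, with neg and pos subfolders:

from fastai.text import *

path = Path('my_texts')  # hypothetical folder containing train/ and valid/ subfolders
# for a classifier, the class subfolders (here neg/pos) provide the labels
data_clas = TextClasDataBunch.from_folder(path, train='train', valid='valid', classes=['neg', 'pos'])
# for a language model, the labels are ignored (always 0)
data_lm = TextLMDataBunch.from_folder(path)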

from_csv[source]

from_csv(path:PathOrStr, csv_name, valid_pct:float=0.2, test:Optional[str]=None, tokenizer:Tokenizer=None, vocab:Vocab=None, classes:StrList=None, header='infer', text_cols:Union[int, Collection[int], str, StrList]=1, label_cols:Union[int, Collection[int], str, StrList]=0, label_delim:str=None, **kwargs) → DataBunch

This function will create a DataBunch from texts in a csv file placed in path (and maybe a test csv file), opened with header. You can specify text_cols and label_cols, or just an integer n_labels in which case the label(s) should be the first column(s). tokenizer will be used to parse those texts into tokens.

You can pass a specific vocab for the numericalization step (for instance, if you are building a classifier from a language model you fine-tuned). kwargs will be split between the TextDataset function and the class initialization; there you can specify parameters such as max_vocab, chunksize, min_freq, n_labels (see the TextDataset documentation) or bs, bptt and pad_idx (see the sections LM data and classifier data).
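
For example, with the IMDB sample csv used further down (whose columns are label, text and is_valid), the columns can be selected explicitly; this is just a sketch of the call:

from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
# the label is in the 'label' column and the text in the 'text' column
# (the defaults are columns 0 and 1, so this is equivalent here)
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv',
                                       text_cols='text', label_cols='label', valid_pct=0.2)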

from_df[source]

from_df(path:PathOrStr, train_df:DataFrame, valid_df:DataFrame, test_df:OptDataFrame=None, tokenizer:Tokenizer=None, vocab:Vocab=None, classes:StrList=None, text_cols:Union[int, Collection[int], str, StrList]=1, label_cols:Union[int, Collection[int], str, StrList]=0, label_delim:str=None, **kwargs) → DataBunch

This function will create a DataBunch in path from the texts in train_df, valid_df and maybe test_df. You can specify text_cols and label_cols, or just an integer n_labels in which case the label(s) should be the first column(s). tokenizer will be used to parse those texts into tokens.

You can pass a specific vocab for the numericalization step (for instance, if you are building a classifier from a language model you fine-tuned). kwargs will be split between the TextDataset function and the class initialization; there you can specify parameters such as max_vocab, chunksize, min_freq, n_labels (see the TextDataset documentation) or bs, bptt and pad_idx (see the sections LM data and classifier data).
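
A sketch using the IMDB sample again, splitting the dataframe with its is_valid column:

import pandas as pd
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
# use the is_valid column of the sample to separate training and validation rows
train_df, valid_df = df[~df.is_valid], df[df.is_valid]
data_clas = TextClasDataBunch.from_df(path, train_df, valid_df,
                                      text_cols='text', label_cols='label')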

from_tokens[source]

from_tokens(path:PathOrStr, trn_tok:Tokens, trn_lbls:Collection[Union[int, float]], val_tok:Tokens, val_lbls:Collection[Union[int, float]], vocab:Vocab=None, tst_tok:Tokens=None, classes:ArgStar=None, **kwargs) → DataBunch

This function will create a DataBunch from trn_tok, trn_lbls, val_tok, val_lbls and maybe tst_tok.

You can pass a specific vocab for the numericalization step (for instance, if you are building a classifier from a language model you fine-tuned). kwargs will be split between the TextDataset function and the class initialization; there you can specify parameters such as max_vocab, chunksize, min_freq, n_labels, tok_suff and lbl_suff (see the TextDataset documentation) or bs, bptt and pad_idx (see the sections LM data and classifier data).
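
A toy sketch with hand-made token lists (in practice these would come from your own tokenization pipeline); the names simply mirror the signature above:

from fastai.text import *

# toy pre-tokenized data: lists of token lists with matching labels
trn_tok = [['xxbos', 'i', 'loved', 'this', 'movie'], ['xxbos', 'what', 'a', 'waste', 'of', 'time']]
trn_lbls = [1, 0]
val_tok  = [['xxbos', 'not', 'bad', 'at', 'all']]
val_lbls = [1]

data_clas = TextClasDataBunch.from_tokens('.', trn_tok, trn_lbls, val_tok, val_lbls)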

from_ids[source]

from_ids(path:PathOrStr, vocab:Vocab, train_ids:Collection[Collection[int]], valid_ids:Collection[Collection[int]], test_ids:Collection[Collection[int]]=None, train_lbls:Collection[Union[int, float]]=None, valid_lbls:Collection[Union[int, float]]=None, classes:ArgStar=None, processor:PreProcessor=None, **kwargs) → DataBunch

This function will create a DataBunch in path from texts already processed into train_ids, train_lbls, valid_ids, valid_lbls and maybe test_ids. You can specify the corresponding classes if applicable. You must specify the vocab so that the RNNLearner class can later infer the corresponding sizes in the model it will create. kwargs will be passed to the class initialization.
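
A sketch, assuming you reuse the vocab from a previous numericalization; the toy ids and labels below are made up for illustration:

import numpy as np
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
vocab = TextLMDataBunch.from_csv(path, 'texts.csv').train_ds.vocab  # reuse an existing vocab
train_ids = [np.array([2, 41, 12, 9]), np.array([2, 23, 7])]        # made-up ids for illustration
valid_ids = [np.array([2, 15, 4])]
data_clas = TextClasDataBunch.from_ids(path, vocab, train_ids, valid_ids,
                                       train_lbls=[0, 1], valid_lbls=[1], classes=[0, 1])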

Load and save

To avoid wasting time preprocessing the text data more than once, you should save/load your TextDataBunch using these methods.

load[source]

load(path:PathOrStr, cache_name:PathOrStr='tmp', processor:PreProcessor=None, **kwargs)

Load a TextDataBunch from path/cache_name. kwargs are passed to the dataloader creation.

save[source]

save(cache_name:PathOrStr='tmp')

Save the DataBunch in self.path/cache_name folder.
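
For instance, with the IMDB sample used below, a typical round trip looks like this (tmp_lm is just a cache folder name chosen for the example):

from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')  # tokenization/numericalization happen here
data_lm.save('tmp_lm')                                 # writes the processed data under path/tmp_lm
data_lm = TextLMDataBunch.load(path, 'tmp_lm')         # reloads almost instantly, no reprocessing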

Example

Untar the IMDB sample dataset if not already done:

path = untar_data(URLs.IMDB_SAMPLE)
path
PosixPath('/home/ubuntu/.fastai/data/imdb_sample')

Since it comes in the form of csv files, we will use the corresponding from_csv method. Here is an overview of what your file should look like:

pd.read_csv(path/'texts.csv').head()
label text is_valid
0 negative Un-bleeping-believable! Meg Ryan doesn't even ... False
1 positive This is a extremely well-made film. The acting... False
2 negative Every once in a long while a movie will come a... False
3 positive Name just says it all. I watched this movie wi... False
4 negative This movie succeeds at being one of the most u... False

And here is a simple way of creating your DataBunch for language modelling or classification.

data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv')

The TextList input classes

Behind the scenes, the previous functions create a training, validation and maybe test TextList that will be tokenized and numericalized (if needed) using a PreProcessor.

class TextList[source]

TextList(items:Iterator, vocab:Vocab=None, **kwargs) :: ItemList

The basic ItemList for text data in items with the corresponding vocab.

label_for_lm[source]

label_for_lm(**kwargs)

A special labelling method for language models.
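
Put together, a minimal data block sketch on the IMDB sample (using its is_valid column for the split) could look like this:

from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList.from_csv(path, 'texts.csv', cols='text')  # grab the texts from the csv
           .split_from_df(col='is_valid')                     # split according to the is_valid column
           .label_for_lm()                                    # label them for a language model
           .databunch())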

class TextFilesList[source]

TextFilesList(items:Iterator, vocab:Vocab=None, processor=None, **kwargs) :: TextList

The basic ItemList for text data stored in the files listed in items, with the corresponding vocab. An optional processor can be passed.

class OpenFileProcessor[source]

OpenFileProcessor() :: PreProcessor

Simple PreProcessor that opens the files in items and reads the texts inside them.

class TokenizeProcessor[source]

TokenizeProcessor(tokenizer:Tokenizer=None, chunksize:int=10000, mark_fields:bool=True) :: PreProcessor

Simple PreProcessor that tokenizes the texts in items with tokenizer, in chunks of size chunksize. If mark_fields is True, field tokens are added.

class NumericalizeProcessor[source]

NumericalizeProcessor(vocab:Vocab=None, max_vocab:int=60000, min_freq:int=2) :: PreProcessor

Numericalize the tokens with vocab (if not None) otherwise create one with max_vocab and min_freq from tokens.
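
These processors are what the factory methods use under the hood; you can also build the pipeline by hand and pass it to a TextList, as in this sketch of the same data block pipeline as above (the parameter values are just examples):

from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
# when the texts come straight from a csv there is no need for an OpenFileProcessor
processor = [TokenizeProcessor(tokenizer=Tokenizer(), chunksize=10000, mark_fields=True),
             NumericalizeProcessor(max_vocab=30000, min_freq=2)]
data_lm = (TextList.from_csv(path, 'texts.csv', cols='text', processor=processor)
           .split_from_df(col='is_valid')
           .label_for_lm()
           .databunch())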

Language Model data

A language model is trained to guess what the next word is inside a flow of words. We don't feed it the different texts separately but concatenate them all together in a big array. To create the batches, we split this array into bs chunks of continuous text. Note that in all NLP tasks, we use the pytorch convention of the sequence length being the first dimension (and the batch size the second one), so we transpose that array so that we can read the chunks of text in the columns. Here is an example of a batch from our IMDB sample dataset.

path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
x,y = next(iter(data.train_dl))
example = x[:20,:10].cpu()
texts = pd.DataFrame([data.train_ds.vocab.textify(l).split(' ') for l in example])
texts
0 1 2 3 4 5 6 7 8 9
0 xxfld in michael that movie xxunk \n\n watch worst ,
1 1 fact moore they down " once a mistakes xxunk
2 i , , have was , this love of and
3 really other but more that " daily story my much
4 enjoyed than he in there sailor ( this life much
5 girl a also common would moon painful one so more
6 fight few follows in be " ) will far .
7 . good in their a and xxunk suffice , it
8 it scenes his old hour co. is . and stars
9 something , xxunk age of are over xxfld it kim
10 i this by than footage xxunk , 1 's bassenger
11 could character using they , . eric i only and
12 watch seems several thought then not rushes am half xxunk
13 over pretty of . basically to down glad done baldwin
14 and much moore even that mention to to . as
15 over wasted 's the same the his read i the
16 again . propaganda xxunk hour xxunk basement so seriously xxunk
17 . \n\n film willie repeated racial , many thought xxunk
18 the while - xxunk 4 / where negative it 's
19 acting i making that times gender all comments was .

Then, as suggested in this article from Stephen Merity et al., we don't use a fixed bptt through the different batches but slightly change it from batch to batch.

iter_dl = iter(data.train_dl)
for _ in range(5):
    x,y = next(iter_dl)
    print(x.size())
torch.Size([68, 64])
torch.Size([64, 64])
torch.Size([40, 64])
torch.Size([69, 64])
torch.Size([66, 64])

This is all done internally when we use TextLMDataBunch, by creating DataLoader using the following class:

class LanguageModelLoader[source]

LanguageModelLoader(dataset:LabelList, bs:int=64, bptt:int=70, backwards:bool=False, shuffle:bool=False, max_len:int=25)

Takes the texts from dataset and concatenates them all, then creates a big array with bs columns (transposed from the data source, so that we read the texts in the columns). It spits out batches with a sequence length approximately equal to bptt but changing at every batch. If backwards is True, the original text is reversed. If shuffle is True, the texts are shuffled before being concatenated together at the start of each epoch. max_len is the maximum amount we add to bptt.

batchify[source]

batchify(data:ndarray) → LongTensor

Called at initialization to create the big array of text ids from the data array.
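
Conceptually, it does something like this simplified numpy sketch (not the library code):

import numpy as np

def toy_batchify(ids, bs):
    "Trim the stream of ids to a multiple of bs, then lay it out so each column is a contiguous chunk."
    n = len(ids) // bs
    return np.array(ids[:n * bs]).reshape(bs, n).T

stream = list(range(20))           # pretend these are the ids of the concatenated texts
print(toy_batchify(stream, bs=4))  # shape (5, 4): sequence dimension first, batch dimension second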

get_batch[source]

get_batch(i:int, seq_len:int) → Tuple[LongTensor, LongTensor]

Create a batch at i of a given seq_len.
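
In a simplified sketch (again, not the library code), the targets of each batch are just the inputs shifted by one token:

def toy_get_batch(data, i, seq_len):
    "data is the big (total_len, bs) array; y is x shifted by one token (the 'next word' targets)."
    seq_len = min(seq_len, len(data) - 1 - i)
    return data[i:i + seq_len], data[i + 1:i + 1 + seq_len]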

Classifier data

When preparing the data for a classifier, we keep the different texts separate, which poses another challenge for the creation of batches: since they don't all have the same length, we can't easily collate them together in batches. To help with this we use two different techniques:

  • padding: each text is padded with the PAD token so that all the texts in a batch reach the same length,
  • sorting the texts (ish): to avoid putting a very long text together with a very short one (which would then need a lot of PAD tokens), we regroup the texts by order of length. For the training set, we still add some randomness to avoid showing the same batches at every step of the training.

Here is an example of batch with padding (the padding index is 1, and the padding is applied before the sentences start).

path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
iter_dl = iter(data.train_dl)
_ = next(iter_dl)
x,y = next(iter_dl)
x[:20,-10:]
tensor([[   1,    1,    1,    1,    1,    1,    1,    1,    1,    1],
        [   1,    1,    1,    1,    1,    1,    1,    1,    1,    1],
        [   1,    1,    1,    1,    1,    1,    1,    1,    1,    1],
        [   1,    1,    1,    1,    1,    1,    1,    1,    1,    1],
        [   1,    1,    1,    1,    1,    1,    1,    1,    1,    1],
        [  43,   43,    1,    1,    1,    1,    1,    1,    1,    1],
        [  40,   40,   43,   43,   43,   43,   43,   43,   43,    1],
        [   2,   10,   40,   40,   40,   40,   40,   40,   40,   43],
        [1061,    9,  297, 5400,    2,   14,   12,    7,   12,   40],
        [  18,  667,   89,  263,   75,    9,  273,   41,  103,    2],
        [  65,    8,  462,   47,  465,    6,   14,    2,   29, 1632],
        [   3, 5047,   47, 2667,   13,   81,   70,  120,  264,  135],
        [   2,   14,  155, 1115, 4282,  229, 1531,   12,   10,    9],
        [5761,   51,    2,  246,   66,   20,   22,   36,   68,  567],
        [  18,  100,    0,   13,   14,    4, 4682,  137,   12,   56],
        [  65,  102,    3,    9,   20,   10,    5,    3,  333, 1343],
        [   3, 5237,  248,   29,    9,    9, 6107,    5,   14,  181],
        [ 288,   25,    9,  522,    0,   46,  859,   13,   20,    3],
        [  33,    7,  487,   89,    4,  195,  286,   16,   11,   23],
        [ 596,    2,  248,  377,   10,   20,   41,  112,   77,    6]],
       device='cuda:0')

This is all done internally when we use TextClasDataBunch, by using the following classes:

class SortSampler[source]

SortSampler(data_source:NPArrayList, key:KeyFunc) :: Sampler

pytorch Sampler to batchify the data_source by order of length of the texts. Used for the validation and (if applicable) the test set.

class SortishSampler[source]

SortishSampler(data_source:NPArrayList, key:KeyFunc, bs:int) :: Sampler

pytorch Sampler to batchify with size bs the data_source by order of length of the texts with a bit of randomness. Used for the training set.

pad_collate[source]

pad_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True) → Tuple[LongTensor, LongTensor]

Function used by the pytorch DataLoader to collate the samples in batches while adding padding with pad_idx. If pad_first is True, padding is applied at the beginning (before the sentence starts) otherwise it's applied at the end.
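
As a toy illustration (the samples below are hand-made pairs of numericalized text and label):

from fastai.text import *

samples = [([4, 8, 15, 16, 23], 1),   # three texts of different lengths
           ([42, 7], 0),
           ([3, 12, 9], 1)]
x, y = pad_collate(samples, pad_idx=1, pad_first=True)
# the two shorter texts are padded with 1s before their tokens start,
# so that all three reach the length of the longest one
print(x, y)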