NLP data processing; tokenizes text and creates vocab indexes

NLP Preprocessing

text.transform contains the functions that deal, behind the scenes, with the two main tasks when preparing texts for modelling: tokenization and numericalization.

Tokenization splits the raw texts into tokens (which can be words, punctuation signs, etc.). The most basic way to do this would be to separate on spaces, but it's possible to be more subtle; for instance, contractions like "isn't" or "don't" should be split into ["is","n't"] or ["do","n't"]. By default, fastai uses the powerful spacy tokenizer.

Numericalization is easier, as it just consists of assigning a unique id to each token and mapping each of those tokens to its respective id.
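As a minimal, fastai-independent sketch of the idea, numericalization is just a pair of lookup tables:

```python
# Hypothetical minimal example of numericalization (independent of fastai).
tokens = ['the', 'movie', 'was', 'good', ',', 'the', 'acting', 'was', 'good']

# itos: id -> token, here in order of first appearance
itos = list(dict.fromkeys(tokens))
# stoi: token -> id, the reverse mapping
stoi = {tok: i for i, tok in enumerate(itos)}

ids = [stoi[tok] for tok in tokens]  # [0, 1, 2, 3, 4, 0, 5, 2, 3]
```

Mapping the ids back through itos recovers the original tokens.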

Tokenization

Introduction

This step is actually divided into two phases: first, we apply a list of rules to the raw texts as preprocessing, then we use the tokenizer to split them into lists of tokens. Combining those rules, the tok_func and the lang to process the texts is the role of the Tokenizer class.

class Tokenizer[source]

Tokenizer(tok_func:Callable='SpacyTokenizer', lang:str='en', rules:ListRules=None, special_cases:StrList=None, n_cpus:int=None)

This class will process texts by applying the rules to them, then tokenizing them with tok_func(lang). special_cases is a list of tokens passed as special to the tokenizer, and n_cpus is the number of CPUs to use for multi-processing (by default, half the CPUs available). We don't pass a tokenizer directly for multi-processing purposes: each process needs to initiate a tokenizer of its own. The rules and special_cases default to:

default_rules = [fix_html, replace_rep, replace_wrep, deal_caps, spec_add_spaces, rm_useless_spaces]

and

default_spec_tok = [BOS, FLD, UNK, PAD]

process_text[source]

process_text(t:str, tok:BaseTokenizer) → List[str]

Process one text t with tokenizer tok.

process_all[source]

process_all(texts:StrList) → List[List[str]]

Process a list of texts.

For an example, we're going to grab some IMDB reviews.

path = untar_data(URLs.IMDB_SAMPLE)
path
PosixPath('/home/ubuntu/.fastai/data/imdb_sample')
df = pd.read_csv(path/'texts.csv', header=None)
example_text = df.iloc[2][1]; example_text
'This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is some merit in this view, but it\'s also true that no one forced Hindus and Muslims in the region to mistreat each other as they did around the time of partition. It seems more likely that the British simply saw the tensions between the religions and were clever enough to exploit them to their own ends.<br /><br />The result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen. But it is never painted as a black-and-white case. There is baseness and nobility on both sides, and also the hope for change in the younger generation.<br /><br />There is redemption of a sort, in the end, when Puro has to make a hard choice between a man who has ruined her life, but also truly loved her, and her family which has disowned her, then later come looking for her. But by that point, she has no option that is without great pain for her.<br /><br />This film carries the message that both Muslims and Hindus have their grave faults, and also that both can be dignified and caring people. The reality of partition makes that realisation all the more wrenching, since there can never be real reconciliation across the India/Pakistan border. In that sense, it is similar to "Mr & Mrs Iyer".<br /><br />In the end, we were glad to have seen the film, even though the resolution was heartbreaking. 
If the UK and US could deal with their own histories of racism with this kind of frankness, they would certainly be better off.'
tokenizer = Tokenizer()
tok = SpacyTokenizer('en')
' '.join(tokenizer.process_text(example_text, tok))
'this is a extremely well - made film . the acting , script and camera - work are all first - rate . the music is good , too , though it is mostly early in the film , when things are still relatively cheery . there are no really superstars in the cast , though several faces will be familiar . the entire cast does an excellent job with the script . \n\n but it is hard to watch , because there is no good end to a situation like the one presented . it is now fashionable to blame the british for setting hindus and muslims against each other , and then cruelly separating them into two countries . there is some merit in this view , but it \'s also true that no one forced hindus and muslims in the region to mistreat each other as they did around the time of partition . it seems more likely that the british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \n\n the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . but it is never painted as a black - and - white case . there is baseness and nobility on both sides , and also the hope for change in the younger generation . \n\n there is redemption of a sort , in the end , when puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . but by that point , she has no option that is without great pain for her . \n\n this film carries the message that both muslims and hindus have their grave faults , and also that both can be dignified and caring people . the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the india / pakistan border . in that sense , it is similar to " mr & mrs iyer " . \n\n in the end , we were glad to have seen the film , even though the resolution was heartbreaking . 
if the uk and us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'

As explained before, the tokenizer splits the text on words/punctuation signs, but in a smart manner. The rules (see below) have also modified the text a little. We can also tokenize a whole list of texts at once:

df = pd.read_csv(path/'texts.csv', header=None)
texts = df[1].values
tokenizer = Tokenizer()
tokens = tokenizer.process_all(texts)
' '.join(tokens[2])
'this is a extremely well - made film . the acting , script and camera - work are all first - rate . the music is good , too , though it is mostly early in the film , when things are still relatively cheery . there are no really superstars in the cast , though several faces will be familiar . the entire cast does an excellent job with the script . \n\n but it is hard to watch , because there is no good end to a situation like the one presented . it is now fashionable to blame the british for setting hindus and muslims against each other , and then cruelly separating them into two countries . there is some merit in this view , but it \'s also true that no one forced hindus and muslims in the region to mistreat each other as they did around the time of partition . it seems more likely that the british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \n\n the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . but it is never painted as a black - and - white case . there is baseness and nobility on both sides , and also the hope for change in the younger generation . \n\n there is redemption of a sort , in the end , when puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . but by that point , she has no option that is without great pain for her . \n\n this film carries the message that both muslims and hindus have their grave faults , and also that both can be dignified and caring people . the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the india / pakistan border . in that sense , it is similar to " mr & mrs iyer " . \n\n in the end , we were glad to have seen the film , even though the resolution was heartbreaking . 
if the uk and us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'

Customize the tokenizer

The tok_func must return an instance of BaseTokenizer:

class BaseTokenizer[source]

BaseTokenizer(lang:str)

Basic class for a tokenizer function.

tokenizer[source]

tokenizer(t:str) → List[str]

Take a text t and returns the list of its tokens.

add_special_cases[source]

add_special_cases(toks:StrList)

Record a list of special tokens toks.

The fastai library uses spacy tokenizers as its default. The following class wraps it as a BaseTokenizer.

class SpacyTokenizer[source]

SpacyTokenizer(lang:str) :: BaseTokenizer

Wrapper around a spacy tokenizer to make it a BaseTokenizer.

If you want to use your own custom tokenizer, just subclass BaseTokenizer and override its tokenizer and add_special_cases methods.
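For illustration, here is a self-contained sketch of that pattern; a stand-in BaseTokenizer is defined so the snippet runs without fastai, and WhitespaceTokenizer is a hypothetical name (in practice you would subclass fastai's BaseTokenizer directly):

```python
from typing import Collection, List

class BaseTokenizer:
    "Stand-in for fastai's BaseTokenizer, for illustration only."
    def __init__(self, lang: str): self.lang = lang
    def tokenizer(self, t: str) -> List[str]: return t.split(' ')
    def add_special_cases(self, toks: Collection[str]): pass

class WhitespaceTokenizer(BaseTokenizer):
    "Hypothetical custom tokenizer that splits on any whitespace run."
    def tokenizer(self, t: str) -> List[str]:
        return t.split()
    def add_special_cases(self, toks: Collection[str]):
        pass  # this simple tokenizer has no special-case machinery
```

It could then be passed to Tokenizer via tok_func=WhitespaceTokenizer.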

Rules

Rules are just functions that take a string and return the modified string. This allows you to customize the list of default_rules as you please. Those default_rules are:

deal_caps[source]

deal_caps(t:str) → str

In t, if a word is written in all caps, we put it in lower case and add a special token before it. This way, a model will more easily learn the meaning of the sentence. All remaining capital letters are lowercased as well.

deal_caps("I'm suddenly SHOUTING FOR NO REASON!")
"i'm suddenly  xxup shouting  xxup for no  xxup reason!"

fix_html[source]

fix_html(x:str) → str

This rule replaces a bunch of HTML characters or entities with plain-text equivalents. For instance, <br /> is replaced by \n, &nbsp; by a space, etc.

fix_html("Some HTML&nbsp;text<br />")
'Some HTML& text\n'

replace_rep[source]

replace_rep(t:str) → str

Whenever a character is repeated more than three times in t, we replace the whole thing with 'TK_REP n char', where n is the number of occurrences and char the character.

replace_rep("I'm so excited!!!!!!!!")
"I'm so excited xxrep 8 ! "
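The documented behaviour can be reproduced with a short regular expression; this is only a sketch of the idea, not fastai's actual implementation:

```python
import re

def replace_rep_sketch(t: str) -> str:
    "Sketch: collapse a character repeated 4+ times into ' xxrep n char '."
    def _repl(m):
        char, reps = m.group(1), m.group(2)
        return f' xxrep {len(reps) + 1} {char} '
    # (\S) captures the character, (\1{3,}) matches 3+ further repeats of it
    return re.sub(r'(\S)(\1{3,})', _repl, t)
```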

replace_wrep[source]

replace_wrep(t:str) → str

Whenever a word is repeated more than four times in t, we replace the whole thing with 'TK_WREP n w', where n is the number of occurrences and w the repeated word.

replace_wrep("I've never ever ever ever ever ever ever ever done this.")
"I've never  xxwrep 7 ever  done this."

rm_useless_spaces[source]

rm_useless_spaces(t:str) → str

Remove multiple spaces in t.

rm_useless_spaces("Inconsistent   use  of     spaces.")
'Inconsistent use of spaces.'

spec_add_spaces[source]

spec_add_spaces(t:str) → str

Add spaces around / and # in t.

spec_add_spaces('I #like to #put #hashtags #everywhere!')
'I  # like to  # put  # hashtags  # everywhere!'
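Since a rule is just a str -> str function, adding your own is straightforward. For example, a hypothetical rule (not part of default_rules) that replaces URLs with a placeholder token could look like this:

```python
import re

def remove_urls(t: str) -> str:
    "Hypothetical custom rule: replace any http(s) URL with a placeholder."
    return re.sub(r'https?://\S+', ' xxurl ', t)

# It could then be prepended to the defaults when building a Tokenizer:
# tokenizer = Tokenizer(rules=[remove_urls] + default_rules)
```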

Numericalization

To convert our set of tokens to unique ids (and be able to have them go through embeddings), we use the following class:

class Vocab[source]

Vocab(itos:Dict[int, str])

Contains the correspondence between numbers and tokens, and can numericalize. itos contains the id-to-token correspondence.

create[source]

create(tokens:Tokens, max_vocab:int, min_freq:int) → Vocab

Create a Vocab dictionary from a set of tokens. Only keeps the max_vocab most frequent tokens, and only if they appear at least min_freq times; the rest are set to UNK.
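The selection logic can be sketched with collections.Counter; this is a simplified stand-in (the real implementation's ordering and special-token handling may differ):

```python
from collections import Counter

def make_itos(texts, max_vocab, min_freq, unk='xxunk'):
    "Sketch of building the id->token list: most frequent tokens first."
    freq = Counter(tok for text in texts for tok in text)
    # keep at most max_vocab tokens, dropping those seen fewer than min_freq times
    itos = [tok for tok, c in freq.most_common(max_vocab) if c >= min_freq]
    return [unk] + itos  # rare/unseen tokens all map to the unk id

texts = [['the', 'cat', 'sat'], ['the', 'dog', 'sat'], ['the', 'end']]
make_itos(texts, max_vocab=100, min_freq=2)  # ['xxunk', 'the', 'sat']
```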

numericalize[source]

numericalize(t:StrList) → List[int]

Convert a list of tokens t to their ids.

textify[source]

textify(nums:Collection[int], sep=' ') → List[str]

Convert a list of nums to their tokens.

vocab = Vocab.create(tokens, max_vocab=1000, min_freq=2)
vocab.numericalize(tokens[2])[:10]
[14, 9, 6, 619, 85, 17, 110, 25, 4, 2]