Text data

Functions and transforms to help gather text data in a Datasets

Backwards

Reversing the text can provide higher accuracy with an ensemble with a forward model. All that is needed is a type_tfm that will reverse the text as it is brought in:

reverse_text

reverse_text (x)

t = tensor([0,1,2])
r = reverse_text(t)
test_eq(r, tensor([2,1,0]))
Numericalizing
Numericalization is the step in which we convert tokens to integers. The first step is to build a correspondence from token to index, which is called a vocab.
make_vocab
make_vocab (count, min_freq=3, max_vocab=60000, special_toks=None)
Create a vocab of max_vocab size from Counter count with items present more than min_freq. If there are more than max_vocab tokens, the ones kept are the most frequent.

For performance when using mixed precision, the vocabulary size is always made a multiple of 8, potentially by adding xxfake tokens.
count = Counter(['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c', 'd'])
test_eq(set([x for x in make_vocab(count) if not x.startswith('xxfake')]),
        set(defaults.text_spec_tok + 'a'.split()))
test_eq(len(make_vocab(count))%8, 0)
test_eq(set([x for x in make_vocab(count, min_freq=1) if not x.startswith('xxfake')]),
        set(defaults.text_spec_tok + 'a b c d'.split()))
test_eq(set([x for x in make_vocab(count, max_vocab=12, min_freq=1) if not x.startswith('xxfake')]),
        set(defaults.text_spec_tok + 'a b c'.split()))
LMTensorText
LMTensorText (x, **kwargs)
Semantic type for a tensor representing text in language modeling
TensorText
TensorText (x, **kwargs)
Semantic type for a tensor representing text
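Both semantic types behave like plain tensors but keep their class through tensor operations, which is how fastai dispatches the right display and decode behavior. A minimal illustrative check (not part of the original docs):

t = TensorText(tensor([1, 2, 3]))
test_eq(type(t), TensorText)
test_eq(type(t[:2]), TensorText)  # the semantic type survives slicing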
Numericalize
Numericalize (vocab=None, min_freq=3, max_vocab=60000, special_toks=None)
Reversible transform of tokenized texts to numericalized ids
num = Numericalize(min_freq=2)
num.setup(L('This is an example of text'.split(), 'this is another text'.split()))
start = 'This is an example of text '
If no vocab is passed, one is created at setup from the data, using make_vocab with min_freq and max_vocab.
start = 'This is an example of text'
num = Numericalize(min_freq=1)
num.setup(L(start.split(), 'this is another text'.split()))
test_eq(set([x for x in num.vocab if not x.startswith('xxfake')]),
        set(defaults.text_spec_tok + 'This is an example of text this another'.split()))
test_eq(len(num.vocab)%8, 0)
t = num(start.split())
test_eq(t, tensor([11, 9, 12, 13, 14, 10]))
test_eq(num.decode(t), start.split())
num = Numericalize(min_freq=2)
num.setup(L('This is an example of text'.split(), 'this is another text'.split()))
test_eq(set([x for x in num.vocab if not x.startswith('xxfake')]),
        set(defaults.text_spec_tok + 'is text'.split()))
test_eq(len(num.vocab)%8, 0)
t = num(start.split())
test_eq(t, tensor([0, 9, 0, 0, 0, 10]))
test_eq(num.decode(t), f'{UNK} is {UNK} {UNK} {UNK} text'.split())
LMDataLoader
LMDataLoader (dataset, lens=None, cache=2, bs=64, seq_len=72, num_workers=0, shuffle:bool=False, verbose:bool=False, do_setup:bool=True, pin_memory=False, timeout=0, batch_size=None, drop_last=False, indexed=None, n=None, device=None, persistent_workers=False, pin_memory_device='', wif=None, before_iter=None, after_item=None, before_batch=None, after_batch=None, after_iter=None, create_batches=None, create_item=None, create_batch=None, retain=None, get_idxs=None, sample=None, shuffle_fn=None, do_batch=None)
A DataLoader suitable for language modeling
dataset should be a collection of numericalized texts for this to work. lens can be passed to optimize the creation; otherwise, the LMDataLoader will do a full pass of the dataset to compute them. cache is used to avoid reloading items unnecessarily.

The LMDataLoader will concatenate all texts (maybe shuffled) in one big stream, split it into bs contiguous streams, then go through those seq_len at a time.
bs,sl = 4,3
ints = L([0,1,2,3,4],[5,6,7,8,9,10],[11,12,13,14,15,16,17,18],[19,20],[21,22,23],[24]).map(tensor)
dl = LMDataLoader(ints, bs=bs, seq_len=sl)
test_eq(list(dl),
    [[tensor([[0, 1, 2], [6, 7, 8], [12, 13, 14], [18, 19, 20]]),
      tensor([[1, 2, 3], [7, 8, 9], [13, 14, 15], [19, 20, 21]])],
     [tensor([[3, 4, 5], [ 9, 10, 11], [15, 16, 17], [21, 22, 23]]),
      tensor([[4, 5, 6], [10, 11, 12], [16, 17, 18], [22, 23, 24]])]])
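The lens speed-up described above can be sketched as follows (illustrative, not from the original docs): precomputing one length per text lets the init skip its full pass over ints while producing the same batches:

lens = ints.map(len)  # one length per numericalized text, computed up front
dl_lens = LMDataLoader(ints, lens=lens, bs=bs, seq_len=sl)
test_eq(list(dl_lens), list(dl))  # identical batches; only the init is faster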
dl = LMDataLoader(ints, bs=bs, seq_len=sl, shuffle=True)
for x,y in dl: test_eq(x[:,1:], y[:,:-1])
((x0,y0), (x1,y1)) = tuple(dl)
# second batch begins where first batch ended
test_eq(y0[:,-1], x1[:,0])
test_eq(type(x0), LMTensorText)
Classification
For classification, we deal with the fact that texts don’t all have the same length by using padding.
Pad_Input
Pad_Input (enc=None, dec=None, split_idx=None, order=None)
A transform that always takes tuples as items
pad_idx is used for the padding, and the padding is applied to the pad_fields of the samples. The padding is applied at the beginning if pad_first is True, and if backwards is added, the tensors are flipped.
test_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0),
        [(tensor([1,2,3]),1), (tensor([4,5,0]),2), (tensor([6,0,0]), 3)])
test_eq(pad_input([(tensor([1,2,3]), (tensor([6]))), (tensor([4,5]), tensor([4,5])), (tensor([6]), (tensor([1,2,3])))], pad_idx=0, pad_fields=1),
        [(tensor([1,2,3]),(tensor([6,0,0]))), (tensor([4,5]),tensor([4,5,0])), ((tensor([6]),tensor([1, 2, 3])))])
test_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0, pad_first=True),
        [(tensor([1,2,3]),1), (tensor([0,4,5]),2), (tensor([0,0,6]), 3)])
test_eq(pad_input([(tensor([1,2,3]),1), (tensor([4,5]), 2), (tensor([6]), 3)], pad_idx=0, backwards=True),
        [(tensor([3,2,1]),1), (tensor([5,4,0]),2), (tensor([6,0,0]), 3)])
x = pad_input([(TensorText([1,2,3]),1), (TensorText([4,5]), 2), (TensorText([6]), 3)], pad_idx=0)
test_eq(x, [(tensor([1,2,3]),1), (tensor([4,5,0]), 2), (tensor([6,0,0]), 3)])
test_eq(pad_input.decode(x[1][0]), tensor([4,5]))
pad_chunk

pad_chunk (x, pad_idx=1, pad_first=True, seq_len=72, pad_len=10)

Pad x by adding padding by chunks of size seq_len

Pads x with pad_idx to length pad_len. If pad_first is False, all padding is appended to x until x has length pad_len. Otherwise, if pad_first is True, chunks of size seq_len are prepended to x and the remainder of the padding is appended to x.
print('pad_first: ',pad_chunk(torch.tensor([1,2,3]),seq_len=3,pad_idx=0,pad_len=8))
print('pad_last: ',pad_chunk(torch.tensor([1,2,3]),seq_len=3,pad_idx=0,pad_len=8,pad_first=False))
pad_first: tensor([0, 0, 0, 1, 2, 3, 0, 0])
pad_last: tensor([1, 2, 3, 0, 0, 0, 0, 0])
pad_input_chunk is the version of pad_chunk that works over a list of lists.
pad_input_chunk
pad_input_chunk (samples, n_inp=1, pad_idx=1, pad_first=True, seq_len=72, pad_len=10)
Pad samples by adding padding by chunks of size seq_len
The difference with the base pad_input is that most of the padding is applied first (if pad_first=True) or at the end (if pad_first=False), but only by a round multiple of seq_len. The rest of the padding is applied to the end (or the beginning if pad_first=False). This is to work with SequenceEncoder with recurrent models.
pad_input_chunk([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)], pad_idx=0, seq_len=3, n_inp=2)

[(TensorText([1, 2, 3, 4, 5, 6]), TensorText([0, 0, 0, 1, 2, 0]), 1)]

test_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),1), (tensor([1,2,3]), 2), (tensor([1,2]), 3)], pad_idx=0, seq_len=2),
        [(tensor([1,2,3,4,5,6]),1), (tensor([0,0,1,2,3,0]),2), (tensor([0,0,0,0,1,2]), 3)])
test_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),), (tensor([1,2,3]),), (tensor([1,2]),)], pad_idx=0, seq_len=2),
        [(tensor([1,2,3,4,5,6]),), (tensor([0,0,1,2,3,0]),), (tensor([0,0,0,0,1,2]),)])
test_eq(pad_input_chunk([(tensor([1,2,3,4,5,6]),), (tensor([1,2,3]),), (tensor([1,2]),)], pad_idx=0, seq_len=2, pad_first=False),
        [(tensor([1,2,3,4,5,6]),), (tensor([1,2,3,0,0,0]),), (tensor([1,2,0,0,0,0]),)])
test_eq(pad_input_chunk([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)], pad_idx=0, seq_len=2, n_inp=2),
        [(TensorText([1,2,3,4,5,6]),TensorText([0,0,0,0,1,2]),1)])
Transform version of pad_input_chunk. This version supports types, decoding, and the other functionality of Transform.
Pad_Chunk
Pad_Chunk (pad_idx=1, pad_first=True, seq_len=72, decode=True, **kwargs)
Pad samples by adding padding by chunks of size seq_len
Here is an example of Pad_Chunk:

pc = Pad_Chunk(pad_idx=0, seq_len=3)
out = pc([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)])
print('Inputs:  ', *[(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)])
print('Encoded: ', *out)
print('Decoded: ', *pc.decode(out))

Inputs:  (TensorText([1, 2, 3, 4, 5, 6]), TensorText([1, 2]), 1)
Encoded: (TensorText([1, 2, 3, 4, 5, 6]), TensorText([0, 0, 0, 1, 2, 0]), 1)
Decoded: (TensorText([1, 2, 3, 4, 5, 6]), TensorText([1, 2]), 1)
pc = Pad_Chunk(pad_idx=0, seq_len=2)
test_eq(pc([(TensorText([1,2,3,4,5,6]),1), (TensorText([1,2,3]), 2), (TensorText([1,2]), 3)]),
        [(tensor([1,2,3,4,5,6]),1), (tensor([0,0,1,2,3,0]),2), (tensor([0,0,0,0,1,2]), 3)])

pc = Pad_Chunk(pad_idx=0, seq_len=2)
test_eq(pc([(TensorText([1,2,3,4,5,6]),), (TensorText([1,2,3]),), (TensorText([1,2]),)]),
        [(tensor([1,2,3,4,5,6]),), (tensor([0,0,1,2,3,0]),), (tensor([0,0,0,0,1,2]),)])

pc = Pad_Chunk(pad_idx=0, seq_len=2, pad_first=False)
test_eq(pc([(TensorText([1,2,3,4,5,6]),), (TensorText([1,2,3]),), (TensorText([1,2]),)]),
        [(tensor([1,2,3,4,5,6]),), (tensor([1,2,3,0,0,0]),), (tensor([1,2,0,0,0,0]),)])

pc = Pad_Chunk(pad_idx=0, seq_len=2)
test_eq(pc([(TensorText([1,2,3,4,5,6]),TensorText([1,2]),1)]),
        [(TensorText([1,2,3,4,5,6]),TensorText([0,0,0,0,1,2]),1)])
SortedDL
SortedDL (dataset, sort_func=None, res=None, bs:int=64, shuffle:bool=False, num_workers:int=None, verbose:bool=False, do_setup:bool=True, pin_memory=False, timeout=0, batch_size=None, drop_last=False, indexed=None, n=None, device=None, persistent_workers=False, pin_memory_device='', wif=None, before_iter=None, after_item=None, before_batch=None, after_batch=None, after_iter=None, create_batches=None, create_item=None, create_batch=None, retain=None, get_idxs=None, sample=None, shuffle_fn=None, do_batch=None)
A DataLoader that goes through the items in the order given by sort_func
 | Type | Default | Details |
---|---|---|---|
dataset | | | Map- or iterable-style dataset from which to load the data |
sort_func | NoneType | None | |
res | NoneType | None | |
bs | int | 64 | Size of batch |
shuffle | bool | False | Whether to shuffle data |
num_workers | int | None | Number of CPU cores to use in parallel (default: All available up to 16) |
verbose | bool | False | Whether to print verbose logs |
do_setup | bool | True | Whether to run setup() for batch transform(s) |
pin_memory | bool | False | |
timeout | int | 0 | |
batch_size | NoneType | None | |
drop_last | bool | False | |
indexed | NoneType | None | |
n | NoneType | None | |
device | NoneType | None | |
persistent_workers | bool | False | |
pin_memory_device | str | ||
wif | NoneType | None | |
before_iter | NoneType | None | |
after_item | NoneType | None | |
before_batch | NoneType | None | |
after_batch | NoneType | None | |
after_iter | NoneType | None | |
create_batches | NoneType | None | |
create_item | NoneType | None | |
create_batch | NoneType | None | |
retain | NoneType | None | |
get_idxs | NoneType | None | |
sample | NoneType | None | |
shuffle_fn | NoneType | None | |
do_batch | NoneType | None |
res is the result of sort_func applied on all elements of the dataset. You can pass it if available to make the init much faster by avoiding an initial pass over the whole dataset. For example, if sorting by text length (as in the default sort_func, called _default_sort) you should pass a list with the length of each element in dataset to res to take advantage of this speed-up.

To get the same init speed-up for the validation set, val_res (a list of text lengths for your validation set) can be passed to the kwargs argument of SortedDL. Below is an example to reduce the init time by passing a list of text lengths for both the training set and the validation set:
# Pass the training dataset text lengths to SortedDL
srtd_dl=partial(SortedDL, res = train_text_lens)
# Pass the validation dataset text lengths
dl_kwargs = [{},{'val_res': val_text_lens}]
# init our Datasets
dsets = Datasets(...)
# init our Dataloaders
dls = dsets.dataloaders(...,dl_type = srtd_dl, dl_kwargs = dl_kwargs)
If shuffle is True, this will shuffle the results of the sort a bit, so that batches contain items of roughly the same size but are not in the exact sorted order.
ds = [(tensor([1,2]),1), (tensor([3,4,5,6]),2), (tensor([7]),3), (tensor([8,9,10]),4)]
dl = SortedDL(ds, bs=2, before_batch=partial(pad_input, pad_idx=0))
test_eq(list(dl), [(tensor([[ 3, 4, 5, 6], [ 8, 9, 10, 0]]), tensor([2, 4])),
                   (tensor([[1, 2], [7, 0]]), tensor([1, 3]))])
ds = [(tensor(range(random.randint(1,10))),i) for i in range(101)]
dl = SortedDL(ds, bs=2, create_batch=partial(pad_input, pad_idx=-1), shuffle=True, num_workers=0)
batches = list(dl)
max_len = len(batches[0][0])
for b in batches:
    assert(len(b[0])) <= max_len
    test_ne(b[0][-1], -1)
TransformBlock for text
To use the data block API, you will need this building block for texts.
TextBlock
TextBlock (tok_tfm, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, special_toks=None)
A TransformBlock for texts
For efficient tokenization, you probably want to use one of the factory methods. Otherwise, you can pass your custom tok_tfm that will deal with tokenization (if your texts are already tokenized, you can pass noop), a vocab, or leave it to be inferred on the texts using min_freq and max_vocab.

is_lm indicates if we want to use texts for language modeling or another task; seq_len is only necessary to tune if is_lm=False, and is passed along to pad_input_chunk.
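For example, here is a minimal sketch (not from the original docs) of a language-model block over already-tokenized texts; df_toks is a hypothetical dataframe whose text column holds token lists, so noop is passed as the tok_tfm:

# sketch: df_toks is a hypothetical DataFrame whose 'text' column already holds token lists
lm_block = TextBlock(noop, is_lm=True, min_freq=1)
dblock = DataBlock(blocks=lm_block, get_x=ColReader('text'), splitter=RandomSplitter(0.1))
# dls = dblock.dataloaders(df_toks, bs=64)  # builds language-model DataLoaders from the block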
TextBlock.from_df
TextBlock.from_df (text_cols, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, tok=None, rules=None, sep=' ', n_workers=4, mark_fields=None, tok_text_col='text', **kwargs)
Build a TextBlock from a dataframe using text_cols
Here is an example using a sample of IMDB stored as a CSV file:
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')

imdb_clas = DataBlock(
    blocks=(TextBlock.from_df('text', seq_len=72), CategoryBlock),
    get_x=ColReader('text'), get_y=ColReader('label'), splitter=ColSplitter())

dls = imdb_clas.dataloaders(df, bs=64)
dls.show_batch(max_n=2)
 | text | category |
---|---|---|
0 | xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n\n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj victor xxmaj vargas became i was always aware that something did n't quite feel right . xxmaj victor xxmaj vargas suffers from a certain xxunk on the director 's part . xxmaj apparently , the director thought that the ethnic backdrop of a xxmaj latino family on the lower east side , and an xxunk storyline would make the film critic proof . xxmaj he was right , but it did n't fool me . xxmaj raising xxmaj victor xxmaj vargas is | negative |
1 | xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with the xxunk possible scenarios to get the two protagonists together in the end . xxmaj in fact , all its charm is xxunk , contained within the characters and the setting and the plot … which is highly believable to xxunk . xxmaj it 's easy to think that such a love story , as beautiful as any other ever told , * could * happen to you … a feeling you do n't often get from other romantic comedies | positive |
vocab, is_lm, seq_len, min_freq and max_vocab are passed to the main init; the other arguments are passed to Tokenizer.from_df.
TextBlock.from_folder
TextBlock.from_folder (path, vocab=None, is_lm=False, seq_len=72, backwards=False, min_freq=3, max_vocab=60000, tok=None, rules=None, extensions=None, folders=None, output_dir=None, skip_if_exists=True, output_names=None, n_workers=4, encoding='utf8', **kwargs)
Build a TextBlock from a path
vocab, is_lm, seq_len, min_freq and max_vocab are passed to the main init; the other arguments are passed to Tokenizer.from_folder.
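As an illustration, here is a sketch following the common IMDB language-model recipe (the folder names are assumptions about the dataset layout), combining the block with get_text_files in a DataBlock:

# sketch: assumes `path` (e.g. from untar_data(URLs.IMDB)) has train/test/unsup folders of text files
imdb_lm = DataBlock(
    blocks=TextBlock.from_folder(path, is_lm=True),
    get_items=partial(get_text_files, folders=['train', 'test', 'unsup']),
    splitter=RandomSplitter(0.1))
# dls = imdb_lm.dataloaders(path, bs=64, seq_len=72)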
TextDataLoaders
TextDataLoaders (*loaders, path:str|pathlib.Path='.', device=None)
Basic wrapper around several DataLoaders with factory methods for NLP problems
You should not use the init directly but one of the following factory methods. All those factory methods accept as arguments:
- text_vocab: the vocabulary used for numericalizing texts (if not passed, it's inferred from the data)
- tok_tfm: if passed, uses this tok_tfm instead of the default
- seq_len: the sequence length used for batches
- bs: the batch size
- val_bs: the batch size for the validation DataLoader (defaults to bs)
- shuffle_train: whether to shuffle the training DataLoader
- device: the PyTorch device to use (defaults to default_device())
TextDataLoaders.from_folder
TextDataLoaders.from_folder (path, train='train', valid='valid', valid_pct=None, seed=None, vocab=None, text_vocab=None, is_lm=False, tok_tfm=None, seq_len=72, splitter=None, backwards=False, bs:int=64, val_bs:int=None, shuffle:bool=True, device=None)
Create from imagenet style dataset in path with train and valid subfolders (or provide valid_pct)
 | Type | Default | Details |
---|---|---|---|
path | str \| pathlib.Path | . | Path to put in DataLoaders |
train | str | train | |
valid | str | valid | |
valid_pct | NoneType | None | |
seed | NoneType | None | |
vocab | NoneType | None | |
text_vocab | NoneType | None | |
is_lm | bool | False | |
tok_tfm | NoneType | None | |
seq_len | int | 72 | |
splitter | NoneType | None | |
backwards | bool | False | |
bs | int | 64 | Size of batch |
val_bs | int | None | Size of batch for validation DataLoader |
shuffle | bool | True | Whether to shuffle data |
device | NoneType | None | Device to put DataLoaders |
If valid_pct is provided, a random split is performed (with an optional seed) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a vocab is passed, only the folders with names in vocab are kept.
Here is an example on a sample of the IMDB movie review dataset:
path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path)
dls.show_batch(max_n=3)
 | text | category |
---|---|---|
0 | xxbos xxmaj match 1 : xxmaj tag xxmaj team xxmaj table xxmaj match xxmaj bubba xxmaj ray and xxmaj spike xxmaj dudley vs xxmaj eddie xxmaj guerrero and xxmaj chris xxmaj benoit xxmaj bubba xxmaj ray and xxmaj spike xxmaj dudley started things off with a xxmaj tag xxmaj team xxmaj table xxmaj match against xxmaj eddie xxmaj guerrero and xxmaj chris xxmaj benoit . xxmaj according to the rules of the match , both opponents have to go through tables in order to get the win . xxmaj benoit and xxmaj guerrero heated up early on by taking turns hammering first xxmaj spike and then xxmaj bubba xxmaj ray . a xxmaj german xxunk by xxmaj benoit to xxmaj bubba took the wind out of the xxmaj dudley brother . xxmaj spike tried to help his brother , but the referee restrained him while xxmaj benoit and xxmaj guerrero | pos |
1 | xxbos xxmaj okay , so xxmaj i 'm not a big video game buff , but was the game xxmaj house of the xxmaj dead really famous enough to make a movie from ? xxmaj sure , they went as far as to actually put in quick video game clips throughout the movie , as though justifying any particular scene of violence , but there are dozens and dozens of games that look exactly the same , with the hand in the bottom on the screen , supposedly your own , holding whatever weapon and goo - ing all kinds of aliens or walking dead or snipers or whatever the case may be . \n\n xxmaj it 's an interesting premise in xxmaj house of the xxmaj dead , with a lot of college kids ( loaded college kids , as it were , kids who are able to pay | neg |
2 | xxbos xxup anchors xxup aweigh sees two eager young sailors , xxmaj joe xxmaj brady ( gene xxmaj kelly ) and xxmaj clarence xxmaj doolittle / xxmaj brooklyn ( frank xxmaj sinatra ) , get a special four - day shore leave . xxmaj eager to get to the girls , particularly xxmaj joe 's xxmaj lola , neither xxmaj joe nor xxmaj brooklyn figure on the interruption of little xxmaj navy - mad xxmaj donald ( dean xxmaj stockwell ) and his xxmaj aunt xxmaj susie ( kathryn xxmaj grayson ) . xxmaj unexperienced in the ways of females and courting , xxmaj brooklyn quickly enlists xxmaj joe to help him win xxmaj aunt xxmaj susie over . xxmaj along the way , however , xxmaj joe finds himself falling for the gal he thinks belongs to his best friend . xxmaj how is xxmaj brooklyn going to take | pos |
TextDataLoaders.from_df
TextDataLoaders.from_df (df, path='.', valid_pct=0.2, seed=None, text_col=0, label_col=1, label_delim=None, y_block=None, text_vocab=None, is_lm=False, valid_col=None, tok_tfm=None, tok_text_col='text', seq_len=72, backwards=False, bs:int=64, val_bs:int=None, shuffle:bool=True, device=None)
Create from df in path with valid_pct
 | Type | Default | Details |
---|---|---|---|
df | | | |
path | str \| pathlib.Path | . | Path to put in DataLoaders |
valid_pct | float | 0.2 | |
seed | NoneType | None | |
text_col | int | 0 | |
label_col | int | 1 | |
label_delim | NoneType | None | |
y_block | NoneType | None | |
text_vocab | NoneType | None | |
is_lm | bool | False | |
valid_col | NoneType | None | |
tok_tfm | NoneType | None | |
tok_text_col | str | text | |
seq_len | int | 72 | |
backwards | bool | False | |
bs | int | 64 | Size of batch |
val_bs | int | None | Size of batch for validation DataLoader |
shuffle | bool | True | Whether to shuffle data |
device | NoneType | None | Device to put DataLoaders |
seed can optionally be passed for reproducibility. text_col, label_col and optionally valid_col are indices or names of the columns for texts/labels and the validation flag. label_delim can be passed for a multi-label problem if your labels are in one column, separated by a particular char. y_block should be passed to indicate your type of targets, in case the library did not infer it properly.

Along with this, you can specify the column the tokenized texts are sent to with tok_text_col. By default they are stored in a column named text after tokenizing.
Here are examples on subsets of IMDB:
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/"texts.csv"); df.head()
 | label | text | is_valid |
---|---|---|---|
0 | negative | Un-bleeping-believable! Meg Ryan doesn't even look her usual pert lovable self in this, which normally makes me forgive her shallow ticky acting schtick. Hard to believe she was the producer on this dog. Plus Kevin Kline: what kind of suicide trip has his career been on? Whoosh... Banzai!!! Finally this was directed by the guy who did Big Chill? Must be a replay of Jonestown - hollywood style. Wooofff! | False |
1 | positive | This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is som... | False |
2 | negative | Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.<br /><br />Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt li... | False |
3 | positive | Name just says it all. I watched this movie with my dad when it came out and having served in Korea he had great admiration for the man. The disappointing thing about this film is that it only concentrate on a short period of the man's life - interestingly enough the man's entire life would have made such an epic bio-pic that it is staggering to imagine the cost for production.<br /><br />Some posters elude to the flawed characteristics about the man, which are cheap shots. The theme of the movie "Duty, Honor, Country" are not just mere words blathered from the lips of a high-brassed offic... | False |
4 | negative | This movie succeeds at being one of the most unique movies you've seen. However this comes from the fact that you can't make heads or tails of this mess. It almost seems as a series of challenges set up to determine whether or not you are willing to walk out of the movie and give up the money you just paid. If you don't want to feel slighted you'll sit through this horrible film and develop a real sense of pity for the actors involved, they've all seen better days, but then you realize they actually got paid quite a bit of money to do this and you'll lose pity for them just like you've alr... | False |
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/"texts.csv")
dls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid')
dls.show_batch(max_n=3)
 | text | category |
---|---|---|
0 | xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n\n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj victor xxmaj vargas became i was always aware that something did n't quite feel right . xxmaj victor xxmaj vargas suffers from a certain xxunk on the director 's part . xxmaj apparently , the director thought that the ethnic backdrop of a xxmaj latino family on the lower east side , and an xxunk storyline would make the film critic proof . xxmaj he was right , but it did n't fool me . xxmaj raising xxmaj victor xxmaj vargas is | negative |
1 | xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with the xxunk possible scenarios to get the two protagonists together in the end . xxmaj in fact , all its charm is xxunk , contained within the characters and the setting and the plot … which is highly believable to xxunk . xxmaj it 's easy to think that such a love story , as beautiful as any other ever told , * could * happen to you … a feeling you do n't often get from other romantic comedies | positive |
2 | xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of " at xxmaj the xxmaj movies " in taking xxmaj steven xxmaj soderbergh to task . \n\n xxmaj it 's usually satisfying to watch a film director change his style / subject , but xxmaj soderbergh 's most recent stinker , xxmaj the xxmaj girlfriend xxmaj xxunk ) , was also missing a story , so narrative ( and editing ? ) seem to suddenly be xxmaj soderbergh 's main challenge . xxmaj strange , after 20 - odd years in the business . xxmaj he was probably never much good at narrative , just xxunk it well inside " edgy " projects . \n\n xxmaj none of this excuses him this present , | negative |
dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid')
dls.show_batch(max_n=3)
 | text | text_ |
---|---|---|
0 | xxbos xxmaj critics need to review what they class as a quality movie . i think the critics have seen too many actions films and have xxunk to the xxmaj matrix style of films . xxmaj xxunk is a breath of fresh air , a film with so many layers that one viewing is not enough to understand or appreciate this outstanding film . xxmaj xxunk von xxmaj xxunk shows that old | xxmaj critics need to review what they class as a quality movie . i think the critics have seen too many actions films and have xxunk to the xxmaj matrix style of films . xxmaj xxunk is a breath of fresh air , a film with so many layers that one viewing is not enough to understand or appreciate this outstanding film . xxmaj xxunk von xxmaj xxunk shows that old styles |
1 | xxmaj xxunk is something ) , but noticeable moments of xxunk as he still struggles to find his humanity . xxmaj this xxunk of his for a real life could get boring , and almost did in xxmaj supremacy , but just works better in xxmaj ultimatum ( better script ) . \n\n i am reminded of a scene in " xxunk " ( the only good xxmaj pierce xxmaj xxunk xxmaj | xxunk is something ) , but noticeable moments of xxunk as he still struggles to find his humanity . xxmaj this xxunk of his for a real life could get boring , and almost did in xxmaj supremacy , but just works better in xxmaj ultimatum ( better script ) . \n\n i am reminded of a scene in " xxunk " ( the only good xxmaj pierce xxmaj xxunk xxmaj bond |
2 | xxmaj mr . xxmaj julia , played his role equally as perfect . xxmaj it was interesting to see how reluctant xxmaj richard xxmaj dreyfuss was in replacing the dictator against his will . xxmaj but he became more confident and comfortable with the role as time passed . xxmaj since everything happens for a reason in life , i believe he was forced to replace the dictator because he was meant | mr . xxmaj julia , played his role equally as perfect . xxmaj it was interesting to see how reluctant xxmaj richard xxmaj dreyfuss was in replacing the dictator against his will . xxmaj but he became more confident and comfortable with the role as time passed . xxmaj since everything happens for a reason in life , i believe he was forced to replace the dictator because he was meant to |
TextDataLoaders.from_csv
TextDataLoaders.from_csv (path, csv_fname='labels.csv', header='infer', delimiter=None, quoting=0, valid_pct=0.2, seed=None, text_col=0, label_col=1, label_delim=None, y_block=None, text_vocab=None, is_lm=False, valid_col=None, tok_tfm=None, tok_text_col='text', seq_len=72, backwards=False, bs:int=64, val_bs:int=None, shuffle:bool=True, device=None)
Create from csv file in path/csv_fname
 | Type | Default | Details |
---|---|---|---|
path | str \| pathlib.Path | . | Path to put in DataLoaders |
csv_fname | str | labels.csv | |
header | str | infer | |
delimiter | NoneType | None | |
quoting | int | 0 | |
valid_pct | float | 0.2 | |
seed | NoneType | None | |
text_col | int | 0 | |
label_col | int | 1 | |
label_delim | NoneType | None | |
y_block | NoneType | None | |
text_vocab | NoneType | None | |
is_lm | bool | False | |
valid_col | NoneType | None | |
tok_tfm | NoneType | None | |
tok_text_col | str | text | |
seq_len | int | 72 | |
backwards | bool | False | |
bs | int | 64 | Size of batch |
val_bs | int | None | Size of batch for validation DataLoader |
shuffle | bool | True | Whether to shuffle data |
device | NoneType | None | Device to put DataLoaders |
Opens the csv file with header and delimiter, then passes all the other arguments to TextDataLoaders.from_df.
dls = TextDataLoaders.from_csv(path=path, csv_fname='texts.csv', text_col='text', label_col='label', valid_col='is_valid')
dls.show_batch(max_n=3)
 | text | category |
---|---|---|
0 | xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n\n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj victor xxmaj vargas became i was always aware that something did n't quite feel right . xxmaj victor xxmaj vargas suffers from a certain xxunk on the director 's part . xxmaj apparently , the director thought that the ethnic backdrop of a xxmaj latino family on the lower east side , and an xxunk storyline would make the film critic proof . xxmaj he was right , but it did n't fool me . xxmaj raising xxmaj victor xxmaj vargas is | negative |
1 | xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with the xxunk possible scenarios to get the two protagonists together in the end . xxmaj in fact , all its charm is xxunk , contained within the characters and the setting and the plot … which is highly believable to xxunk . xxmaj it 's easy to think that such a love story , as beautiful as any other ever told , * could * happen to you … a feeling you do n't often get from other romantic comedies | positive |
2 | xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of " at xxmaj the xxmaj movies " in taking xxmaj steven xxmaj soderbergh to task . \n\n xxmaj it 's usually satisfying to watch a film director change his style / subject , but xxmaj soderbergh 's most recent stinker , xxmaj the xxmaj girlfriend xxmaj xxunk ) , was also missing a story , so narrative ( and editing ? ) seem to suddenly be xxmaj soderbergh 's main challenge . xxmaj strange , after 20 - odd years in the business . xxmaj he was probably never much good at narrative , just xxunk it well inside " edgy " projects . \n\n xxmaj none of this excuses him this present , | negative |