The data block API

The data block API lets you customize the creation of a DataBunch by isolating the underlying parts of that process in separate blocks, mainly:

  1. Where are the inputs and how to create them?
  2. How to split the data into training and validation sets?
  3. How to label the inputs?
  4. What transforms to apply?
  5. How to add a test set?
  6. How to wrap in dataloaders and create the DataBunch?

Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices, or depending on the folder they are in. Your labels can be in your csv file or your dataframe, but they may also come from folders or from a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally, you have to set the arguments to put the data together in a DataBunch (batch size, collate function...)

The data block API is so called because you can mix and match each one of those blocks with the others, giving you total flexibility to create your customized DataBunch for training, validation and testing. The factory methods of the various DataBunch subclasses are great for beginners, but you can't always make your data fit in the tracks they require.

Mix and match

As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.

Examples of use

Let's begin with our traditional MNIST example.

from fastai.vision import *
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
[PosixPath('/home/jupyter/.fastai/data/mnist_tiny/models'),
 PosixPath('/home/jupyter/.fastai/data/mnist_tiny/valid'),
 PosixPath('/home/jupyter/.fastai/data/mnist_tiny/test'),
 PosixPath('/home/jupyter/.fastai/data/mnist_tiny/labels.csv'),
 PosixPath('/home/jupyter/.fastai/data/mnist_tiny/train')]
(path/'train').ls()
[PosixPath('/home/jupyter/.fastai/data/mnist_tiny/train/7'),
 PosixPath('/home/jupyter/.fastai/data/mnist_tiny/train/3')]

In vision.data, we can create a DataBunch suitable for image classification by simply typing:

data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64)

This is a shortcut method aimed at data laid out in ImageNet style: train and valid directories, each containing one subdirectory per class where all the labelled pictures are, plus a test directory containing unlabelled pictures.

Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this:

data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders
        .split_by_folder()              #How to split in train/valid? -> use the folders
        .label_from_folder()            #How to label? -> depending on the folder of the filenames
        .add_test_folder()              #Optionally add a test set (here default name is test)
        .transform(tfms, size=64)       #Data augmentation? -> use tfms with a size of 64
        .databunch())                   #Finally? -> use the defaults for conversion to ImageDataBunch

Now we can look at the created DataBunch:

data.show_batch(3, figsize=(6,6), hide_axis=False)

Let's look at another example from vision.data with the planet dataset. This time it's a multi-label classification problem, with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:

planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
pd.read_csv(planet/"labels.csv").head()
  image_name   tags
0 train_31112  clear primary
1 train_4300   partly_cloudy primary water
2 train_39539  clear primary water
3 train_12498  agriculture clear primary road
4 train_9320   clear primary
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms)

With the data block API, we can rewrite this as:

planet.ls()
[PosixPath('/home/jupyter/.fastai/data/planet_tiny/labels.csv'),
 PosixPath('/home/jupyter/.fastai/data/planet_tiny/train')]
pd.read_csv(planet/"labels.csv").head()
  image_name   tags
0 train_31112  clear primary
1 train_4300   partly_cloudy primary water
2 train_39539  clear primary water
3 train_12498  agriculture clear primary road
4 train_9320   clear primary
data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        #Where to find the data? -> in planet 'train' folder
        .split_by_rand_pct()
        #How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_df(label_delim=' ')
        #How to label? -> use the second column of the csv file and split the tags by ' '
        .transform(planet_tfms, size=128)
        #Data augmentation? -> use tfms with a size of 128
        .databunch())                          
        #Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(9,7))

The data block API also allows you to assemble your data for problems for which there is no direct ImageDataBunch factory method. For a segmentation task, for instance, we can use it to quickly get a DataBunch. Let's take the example of the camvid dataset. The images are in an 'images' folder and their corresponding masks are in a 'labels' folder.

camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'

We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)

codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
array(['Animal', 'Archway', 'Bicyclist', 'Bridge', 'Building', 'Car', 'CartLuggagePram', 'Child', 'Column_Pole',
       'Fence', 'LaneMkgsDriv', 'LaneMkgsNonDriv', 'Misc_Text', 'MotorcycleScooter', 'OtherMoving', 'ParkingBlock',
       'Pedestrian', 'Road', 'RoadShoulder', 'Sidewalk', 'SignSymbol', 'Sky', 'SUVPickupTruck', 'TrafficCone',
       'TrafficLight', 'Train', 'Tree', 'Truck_Bus', 'Tunnel', 'VegetationMisc', 'Void', 'Wall'], dtype='<U17')

And we define the following function that infers the mask filename from the image filename.

get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
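
To make the convention concrete, here is a quick check (a minimal sketch; the exact filename you get will depend on the dataset contents):

img_f = path_img.ls()[0]     # any image file from the 'images' folder
get_y_fn(img_f)              # -> path_lbl/'<same stem>_P<same suffix>', the matching mask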

Then we can easily define a DataBunch using the data block API. Here we need to use tfm_y=True in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.

data = (SegmentationItemList.from_folder(path_img)
        #Where to find the data? -> in path_img and its subfolders
        .split_by_rand_pct()
        #How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_func(get_y_fn, classes=codes)
        #How to label? -> use the label function on the file name of the data
        .transform(get_transforms(), tfm_y=True, size=128)
        #Data augmentation? -> use tfms with a size of 128, also transform the label images
        .databunch())
        #Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(7,5))

Another example is object detection. We use our tiny sample of the COCO dataset here. There is a helper function in the library that reads the annotation file and returns the list of image names along with the list of labelled bboxes associated with each one. We convert it to a dictionary that maps image names to their bboxes and then write the function that will give us the target for each image filename.

coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]

The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.

data = (ObjectItemList.from_folder(coco)
        #Where are the images? -> in coco and its subfolders
        .split_by_rand_pct()                          
        #How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_func(get_y_func)
        #How to find the labels? -> use get_y_func on the file name of the data
        .transform(get_transforms(), tfm_y=True)
        #Data augmentation? -> Standard transforms; also transform the label images
        .databunch(bs=16, collate_fn=bb_pad_collate))   
        #Finally we convert to a DataBunch, use a batch size of 16,
        # and we use bb_pad_collate to collate the data into a mini-batch
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))

But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.

from fastai.text import *
imdb = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList
           .from_csv(imdb, 'texts.csv', cols='text')
           #Where are the text? Column 'text' of texts.csv
           .split_by_rand_pct()
           #How to split it? Randomly with the default 20% in valid
           .label_for_lm()
           #Label it for a language model
           .databunch())
           #Finally we convert to a DataBunch
data_lm.show_batch()
idx text
0 ! ! ! xxmaj finally this was directed by the guy who did xxmaj big xxmaj xxunk ? xxmaj must be a replay of xxmaj jonestown - hollywood style . xxmaj xxunk ! xxbos xxmaj this is a extremely well - made film . xxmaj the acting , script and camera - work are all first - rate . xxmaj the music is good , too , though it is
1 , co - billed with xxup the xxup xxunk xxup vampire . a xxmaj spanish - xxmaj italian co - production where a series of women in a village are being murdered around the same time a local count named xxmaj yanos xxmaj xxunk is seen on xxunk , riding off with his ' man - eating ' dog behind him . \n \n xxmaj the xxunk already suspect
2 sad relic that is well worth seeing . xxbos i caught this on the dish last night . i liked the movie . i xxunk to xxmaj russia 3 different times ( xxunk our 2 kids ) . i ca n't put my finger on exactly why i liked this movie other than seeing " bad " turn " good " and " good " turn " semi - bad
3 pushed him along . xxmaj the story ( if it can be called that ) is so full of holes it 's almost funny , xxmaj it never really explains why the hell he survived in the first place , or needs human flesh in order to survive . xxmaj the script is poorly written and the dialogue xxunk on just plane stupid . xxmaj the climax to movie (
4 the xxunk of the xxmaj xxunk xxmaj race and had the xxunk of some of those racist xxunk . xxmaj fortunately , nothing happened like the incident in the movie where the young xxmaj caucasian man went off and started shooting at a xxunk gathering . \n \n i can only hope and pray that nothing like that ever will happen . \n \n xxmaj so is "

For a classification problem, we just have to change the way the labeling is done. Here we use the label column of the csv file.

data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
                   .split_from_df(col='is_valid')
                   .label_from_df(cols='label')
                   .databunch())
data_clas.show_batch()
text target
xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n \n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj negative
xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with positive
xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of " xxmaj at xxmaj the xxmaj movies " in taking xxmaj steven xxmaj soderbergh to task . \n \n xxmaj it 's usually satisfying to watch a film director change his style / negative
xxbos xxmaj this film sat on my xxmaj xxunk for weeks before i watched it . i xxunk a self - indulgent xxunk flick about relationships gone bad . i was wrong ; this was an xxunk xxunk into the screwed - up xxunk of xxmaj new xxmaj xxunk . \n \n xxmaj the format is the same as xxmaj max xxmaj xxunk ' " xxmaj la xxmaj xxunk positive
xxbos xxmaj many neglect that this is n't just a classic due to the fact that it 's the first xxup 3d game , or even the first xxunk - up . xxmaj it 's also one of the first xxunk games , one of the xxunk definitely the first ) truly claustrophobic games , and just a pretty well - xxunk gaming experience in general . xxmaj with graphics positive

Lastly, for tabular data, we just have to pass the names of our categorical and continuous variables as an extra argument. We also add some PreProcessors that are going to be applied to our data once the splitting and labelling is done.

from fastai.tabular import *
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
                           .split_by_idx(valid_idx=range(800,1000))
                           .label_from_df(cols=dep_var)
                           .databunch())
data.show_batch()
workclass education marital-status occupation relationship race sex native-country education-num_na education-num hours-per-week age capital-loss fnlwgt capital-gain target
Private Assoc-acdm Married-civ-spouse Tech-support Husband White Male United-States False 0.7511 -2.4656 -0.3362 4.8553 -0.9396 -0.1459 <50k
Private HS-grad Divorced Other-service Not-in-family White Female United-States False -0.4224 -0.0356 0.7632 -0.2164 -0.0449 -0.1459 <50k
Private Some-college Married-civ-spouse Exec-managerial Husband White Male United-States False -0.0312 -0.0356 0.9098 -0.2164 0.6116 -0.1459 >=50k
Private 9th Divorced Transport-moving Not-in-family White Male United-States False -1.9869 -0.0356 -0.5561 -0.2164 -0.5796 -0.1459 <50k
Private Masters Married-civ-spouse Prof-specialty Husband White Male United-States False 1.5334 0.7743 -0.5561 -0.2164 -0.0140 -0.1459 <50k

Step 1: Provide inputs

The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name ItemList).

class ItemList[source][test]

ItemList(items:Iterator[T_co], path:PathOrStr='.', label_cls:Callable=None, inner_df:Any=None, processor:Union[PreProcessor, Collection[PreProcessor]]=None, x:ItemList=None, ignore_empty:bool=False)

Tests found for ItemList:

Some other tests where ItemList is used:

  • pytest -sv tests/test_data_block.py::test_category [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_non_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_filter_by_folder [source]
  • pytest -sv tests/test_data_block.py::test_multi_category [source]
  • pytest -sv tests/test_data_block.py::test_regression [source]
  • pytest -sv tests/test_data_block.py::test_split_subsets [source]
  • pytest -sv tests/test_data_block.py::test_splitdata_datasets [source]

To run tests please refer to this guide.

A collection of items with __len__ and __getitem__ with ndarray indexing semantics.

This class regroups the inputs for our model in items and saves a path attribute which is where it will look for any files (image files, csv file with labels...). label_cls will be called to create the labels from the result of the label function, inner_df is an underlying dataframe, and processor is to be applied to the inputs after the splitting and labeling.
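
As a minimal sketch (the items here are plain strings, only to illustrate the container behaviour), an ItemList can be built directly from any collection:

from fastai.vision import *   # ItemList is re-exported by the application modules
il = ItemList(items=['a', 'b', 'c', 'd'], path='.')
len(il)       # 4 items
il[0]         # a single item, produced by get(0)
il[[0, 2]]    # fancy indexing returns a new ItemList with just those items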

It has multiple subclasses depending on the type of data you're handling; the examples above used ImageList, SegmentationItemList, ObjectItemList, TextList and TabularList, and each application module adds its own.

Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods.

from_folder[source][test]

from_folder(path:PathOrStr, extensions:StrList=None, recurse:bool=True, include:OptStrList=None, processor:Union[PreProcessor, Collection[PreProcessor]]=None, **kwargs) → ItemList

Tests found for from_folder:

Some other tests where from_folder is used:

  • pytest -sv tests/test_data_block.py::test_wrong_order [source]

To run tests please refer to this guide.

Create an ItemList in path from the filenames that have a suffix in extensions. recurse determines if we search subfolders.

path = untar_data(URLs.MNIST_TINY)
path.ls()
[PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/valid'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/labels.csv'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/test'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/cleaned.csv'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/history.csv'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/models'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_tiny/train')]
ImageList.from_folder(path)
ImageList (1428 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_tiny

from_df[source][test]

from_df(df:DataFrame, path:PathOrStr='.', cols:IntsOrStrs=0, processor:Union[PreProcessor, Collection[PreProcessor]]=None, **kwargs) → ItemList

Tests found for from_df:

Some other tests where from_df is used:

  • pytest -sv tests/test_data_block.py::test_category [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_non_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_multi_category [source]
  • pytest -sv tests/test_data_block.py::test_regression [source]

To run tests please refer to this guide.

Create an ItemList in path from the inputs in the cols of df.

path = untar_data(URLs.MNIST_SAMPLE)
path.ls()
[PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/valid'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/labels.csv'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/export.pkl'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/models'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/train'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/trained_model.pkl')]
df = pd.read_csv(path/'labels.csv')
df.head()
  name               label
0 train/3/7463.png   0
1 train/3/21102.png  0
2 train/3/31559.png  0
3 train/3/46882.png  0
4 train/3/26209.png  0
ImageList.from_df(df, path)
ImageList (14434 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

from_csv[source][test]

from_csv(path:PathOrStr, csv_name:str, cols:IntsOrStrs=0, delimiter:str=None, header:str='infer', processor:Union[PreProcessor, Collection[PreProcessor]]=None, **kwargs) → ItemList

No tests found for from_csv. To contribute a test please refer to this guide and this discussion.

Create an ItemList in path from the inputs in the cols of path/csv_name.

path = untar_data(URLs.MNIST_SAMPLE)
path.ls()
[PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/valid'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/labels.csv'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/export.pkl'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/models'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/train'),
 PosixPath('/Users/georgezhang/.fastai/data/mnist_sample/trained_model.pkl')]
ImageList.from_csv(path, 'labels.csv')
ImageList (14434 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

Optional step: filter your data

The factory method may have grabbed too many items. For instance, if you were searching subfolders with the from_folder method, you may have gotten files you don't want. To remove those, you can use one of the following methods.

filter_by_func[source][test]

filter_by_func(func:Callable) → ItemList

No tests found for filter_by_func. To contribute a test please refer to this guide and this discussion.

Only keep elements for which func returns True.

path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')
df.head()
  name               label
0 train/3/7463.png   0
1 train/3/21102.png  0
2 train/3/31559.png  0
3 train/3/46882.png  0
4 train/3/26209.png  0

Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you.

Path(df.name[0]).suffix
'.png'
ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png')
ImageList (14434 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

filter_by_folder[source][test]

filter_by_folder(include=None, exclude=None)

Tests found for filter_by_folder:

  • pytest -sv tests/test_data_block.py::test_filter_by_folder [source]

To run tests please refer to this guide.

Only keep filenames whose folder is in include, or reject the ones whose folder is in exclude.
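
For instance, a minimal sketch on MNIST_TINY that keeps only the images sitting under the train and valid folders, dropping everything else (e.g. the test folder):

path = untar_data(URLs.MNIST_TINY)
ImageList.from_folder(path).filter_by_folder(include=['train', 'valid'])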

filter_by_rand[source][test]

filter_by_rand(p:float, seed:int=None)

No tests found for filter_by_rand. To contribute a test please refer to this guide and this discussion.

Keep a random sample of items with probability p, with an optional seed.

path = untar_data(URLs.MNIST_SAMPLE)
ImageList.from_folder(path).filter_by_rand(0.5)
ImageList (7255 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

Contrast the number of items with the list created without the filter.

ImageList.from_folder(path)
ImageList (14434 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

to_text[source][test]

to_text(fn:str)

No tests found for to_text. To contribute a test please refer to this guide and this discussion.

Save self.items to fn in self.path.

path = untar_data(URLs.MNIST_SAMPLE)
pd.read_csv(path/'labels.csv').head()
  name               label
0 train/3/7463.png   0
1 train/3/21102.png  0
2 train/3/31559.png  0
3 train/3/46882.png  0
4 train/3/26209.png  0
file_name = "item_list.txt"
ImageList.from_folder(path).to_text(file_name)
! cat {path/file_name} | head
valid/7/9294.png
valid/7/1186.png
valid/7/6825.png
valid/7/4767.png
valid/7/6170.png
valid/7/6164.png
valid/7/9257.png
valid/7/4773.png
valid/7/8175.png
valid/7/6158.png

use_partial_data[source][test]

use_partial_data(sample_pct:float=0.01, seed:int=None) → ItemList

No tests found for use_partial_data. To contribute a test please refer to this guide and this discussion.

Use only a sample of sample_pct of the full dataset, with an optional seed.

path = untar_data(URLs.MNIST_SAMPLE)
ImageList.from_folder(path).use_partial_data(0.5)
ImageList (7217 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

Contrast the number of items with the list created without the filter.

ImageList.from_folder(path)
ImageList (14434 items)
Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28),Image (3, 28, 28)
Path: /Users/georgezhang/.fastai/data/mnist_sample

Writing your own ItemList

First check if you can't easily customize one of the existing subclasses by:

  • subclassing an existing one and replacing the get method (or the open method if you're dealing with images)
  • applying a custom processor (see step 4)
  • changing the default label_cls for the label creation
  • adding a default PreProcessor with the _processor class variable

If this isn't the case and you really need to write your own class, there is a full tutorial that explains how to proceed.
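
As an illustration of the first option, here is a minimal, hypothetical sketch (NpyItemList is not part of fastai) of an ItemList whose items are paths to .npy files, with get overridden to load each array as a tensor:

from fastai.vision import *
class NpyItemList(ItemList):
    "Hypothetical ItemList over .npy files: `get` loads the array for item i."
    def get(self, i):
        fn = super().get(i)                           # the stored item is a file path
        return torch.from_numpy(np.load(fn)).float()  # load it as a float tensor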

analyze_pred[source][test]

analyze_pred(pred:Tensor)

No tests found for analyze_pred. To contribute a test please refer to this guide and this discussion.

Called on pred before reconstruct for additional preprocessing.

get[source][test]

get(i) → Any

No tests found for get. To contribute a test please refer to this guide and this discussion.

Subclass if you want to customize how to create item i from self.items.

new[source][test]

new(items:Iterator[T_co], processor:Union[PreProcessor, Collection[PreProcessor]]=None, **kwargs) → ItemList

No tests found for new. To contribute a test please refer to this guide and this discussion.

Create a new ItemList from items, keeping the same attributes.

You'll normally never need to subclass this; just don't forget to add to self.copy_new the names of the arguments that need to be copied each time new is called in __init__.

reconstruct[source][test]

reconstruct(t:Tensor, x:Tensor=None)

No tests found for reconstruct. To contribute a test please refer to this guide and this discussion.

Reconstruct one of the underlying items from its data t.

Step 2: Split the data between the training and the validation set

This step is normally straightforward: you just have to pick one of the following functions depending on what you need.

split_none[source][test]

split_none()

No tests found for split_none. To contribute a test please refer to this guide and this discussion.

Don't split the data and create an empty validation set.

split_by_rand_pct[source][test]

split_by_rand_pct(valid_pct:float=0.2, seed:int=None) → ItemLists

Tests found for split_by_rand_pct:

  • pytest -sv tests/test_data_block.py::test_splitdata_datasets [source]

Some other tests where split_by_rand_pct is used:

  • pytest -sv tests/test_data_block.py::test_regression [source]

To run tests please refer to this guide.

Split the items randomly by putting valid_pct in the validation set, optional seed can be passed.
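
For example, a reproducible 80/20 random split (a minimal sketch on MNIST_TINY):

path = untar_data(URLs.MNIST_TINY)
sd = ImageList.from_folder(path).split_by_rand_pct(valid_pct=0.2, seed=42)
# sd is an ItemLists object with a .train and a .valid ItemList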

split_subsets[source][test]

split_subsets(train_size:float, valid_size:float, seed=None) → ItemLists

Tests found for split_subsets:

  • pytest -sv tests/test_data_block.py::test_split_subsets [source]

To run tests please refer to this guide.

Split the items into train set with size train_size * n and valid set with size valid_size * n.

This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: split_subsets(train_size=0.08, valid_size=0.2).
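
A minimal sketch of that call, chained on an ImageList (reusing the path from the previous example):

sd = ImageList.from_folder(path).split_subsets(train_size=0.08, valid_size=0.2, seed=42)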

split_by_files[source][test]

split_by_files(valid_names:ItemList) → ItemLists

No tests found for split_by_files. To contribute a test please refer to this guide and this discussion.

Split the data by using the names in valid_names for validation.

split_by_fname_file[source][test]

split_by_fname_file(fname:PathOrStr, path:PathOrStr=None) → ItemLists

No tests found for split_by_fname_file. To contribute a test please refer to this guide and this discussion.

Split the data by using the names in fname for the validation set. path will override self.path.

split_by_folder[source][test]

split_by_folder(train:str='train', valid:str='valid') → ItemLists

Tests found for split_by_folder:

Some other tests where split_by_folder is used:

  • pytest -sv tests/test_data_block.py::test_wrong_order [source]

To run tests please refer to this guide.

Split the data depending on the folder (train or valid) in which the filenames are.

split_by_idx[source][test]

split_by_idx(valid_idx:Collection[int]) → ItemLists

Tests found for split_by_idx:

Some other tests where split_by_idx is used:

  • pytest -sv tests/test_data_block.py::test_category [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_non_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_multi_category [source]

To run tests please refer to this guide.

Split the data according to the indexes in valid_idx.

split_by_idxs[source][test]

split_by_idxs(train_idx, valid_idx)

No tests found for split_by_idxs. To contribute a test please refer to this guide and this discussion.

Split the data between train_idx and valid_idx.

split_by_list[source][test]

split_by_list(train, valid)

No tests found for split_by_list. To contribute a test please refer to this guide and this discussion.

Split the data between train and valid.

split_by_valid_func[source][test]

split_by_valid_func(func:Callable) → ItemLists

No tests found for split_by_valid_func. To contribute a test please refer to this guide and this discussion.

Split the data by result of func (which returns True for validation set).
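
For instance, a minimal sketch that sends every file living under a valid directory to the validation set (the function receives each item, here a file path):

sd = (ImageList.from_folder(path)
      .split_by_valid_func(lambda o: 'valid' in Path(o).parts))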

split_from_df[source][test]

split_from_df(col:IntsOrStrs=2)

No tests found for split_from_df. To contribute a test please refer to this guide and this discussion.

Split the data from the col in the dataframe in self.inner_df.

Step 3: Label the inputs

To label your inputs, use one of the following functions. Note that even if it's not listed among the documented arguments, you can always pass a label_cls that will be used to create those labels (the default is the one from your input ItemList; if there is none, it falls back to CategoryList, MultiCategoryList or FloatList depending on the type of the labels). This is implemented in the following function:

get_label_cls[source][test]

get_label_cls(labels, label_cls:Callable=None, label_delim:str=None, **kwargs)

No tests found for get_label_cls. To contribute a test please refer to this guide and this discussion.

Return label_cls or guess one from the first element of labels.

If no label_cls argument is passed, the correct labeling type can usually be inferred from the data (for classification or regression). If you have multiple regression targets (e.g. predicting 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure to pass label_cls=FloatList so that learners created from your databunch initialize correctly.
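
For instance, a minimal sketch of an image regression problem (df, path and the name/score columns are hypothetical here) where we label explicitly with FloatList:

data = (ImageList.from_df(df, path, cols='name')           # hypothetical df with image filenames in 'name'
        .split_by_rand_pct()
        .label_from_df(cols='score', label_cls=FloatList)  # 'score' holds a float target -> regression
        .databunch())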

The first example in these docs created labels as follows:

path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train

If you want to save the data necessary to recreate your LabelList (not including saving the actual image/text/etc files), you can use to_df or to_csv:

ll.to_csv('tmp.csv')

Or just grab a pd.DataFrame directly:

ll.to_df().head()
  x                 y
0 train/7/8845.png  7
1 train/7/8297.png  7
2 train/7/7945.png  7
3 train/7/8186.png  7
4 train/7/9843.png  7

label_empty[source][test]

label_empty(**kwargs)

No tests found for label_empty. To contribute a test please refer to this guide and this discussion.

Label every item with an EmptyLabel.

label_from_df[source][test]

label_from_df(cols:IntsOrStrs=1, label_cls:Callable=None, **kwargs)

Tests found for label_from_df:

Some other tests where label_from_df is used:

  • pytest -sv tests/test_data_block.py::test_category [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_non_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_multi_category [source]
  • pytest -sv tests/test_data_block.py::test_regression [source]

To run tests please refer to this guide.

Label self.items from the values in cols in self.inner_df.

label_const[source][test]

label_const(const:Any=0, label_cls:Callable=None, **kwargs) → LabelList

Tests found for label_const:

Some other tests where label_const is used:

  • pytest -sv tests/test_data_block.py::test_split_subsets [source]
  • pytest -sv tests/test_data_block.py::test_splitdata_datasets [source]

To run tests please refer to this guide.

Label every item with const.

label_from_folder[source][test]

label_from_folder(label_cls:Callable=None, **kwargs) → LabelList

Tests found for label_from_folder:

  • pytest -sv tests/test_text_data.py::test_filter_classes [source]
  • pytest -sv tests/test_text_data.py::test_from_folder [source]

Some other tests where label_from_folder is used:

  • pytest -sv tests/test_data_block.py::test_wrong_order [source]

To run tests please refer to this guide.

Give a label to each filename depending on its folder.

label_from_func[source][test]

label_from_func(func:Callable, label_cls:Callable=None, **kwargs) → LabelList

No tests found for label_from_func. To contribute a test please refer to this guide and this discussion.

Apply func to every input to get its label.

label_from_re[source][test]

label_from_re(pat:str, full_path:bool=False, label_cls:Callable=None, **kwargs) → LabelList

No tests found for label_from_re. To contribute a test please refer to this guide and this discussion.

Apply the re in pat to determine the label of every filename. If full_path, search in the full name.
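
For instance, a minimal sketch (path_imgs and the filename pattern are hypothetical) where filenames look like german_shepherd_102.jpg and the first regex group becomes the label:

data = (ImageList.from_folder(path_imgs)          # hypothetical folder of images
        .split_by_rand_pct()
        .label_from_re(r'/([^/]+)_\d+\.jpg$')     # capture what sits between the last '/' and '_<number>.jpg'
        .databunch())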

class CategoryList[source][test]

CategoryList(items:Iterator[T_co], classes:Collection[T_co]=None, label_delim:str=None, **kwargs) :: CategoryListBase

No tests found for CategoryList. To contribute a test please refer to this guide and this discussion.

Basic ItemList for single classification labels.

ItemList suitable for storing labels in items belonging to classes. If None is passed, classes will be determined from the unique labels. processor will default to CategoryProcessor.

class MultiCategoryList[source][test]

MultiCategoryList(items:Iterator[T_co], classes:Collection[T_co]=None, label_delim:str=None, one_hot:bool=False, **kwargs) :: CategoryListBase

No tests found for MultiCategoryList. To contribute a test please refer to this guide and this discussion.

Basic ItemList for multi-classification labels.

It will store a list of labels in items belonging to classes. If None is passed, classes will be determined from the unique labels. label_delim is used to split the content of items into a list of tags.

If one_hot=True, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of classes (as we can't use the different labels).

class FloatList[source][test]

FloatList(items:Iterator[T_co], log:bool=False, classes:Collection[T_co]=None, **kwargs) :: ItemList

No tests found for FloatList. To contribute a test please refer to this guide and this discussion.

ItemList suitable for storing the floats in items for regression. Will take the log of the values if log is True.

class EmptyLabelList[source][test]

EmptyLabelList(items:Iterator[T_co], path:PathOrStr='.', label_cls:Callable=None, inner_df:Any=None, processor:Union[PreProcessor, Collection[PreProcessor]]=None, x:ItemList=None, ignore_empty:bool=False) :: ItemList

No tests found for EmptyLabelList. To contribute a test please refer to this guide and this discussion.

Basic ItemList for dummy labels.

Invisible step: preprocessing

This isn't seen here in the API, but if you passed a processor (or a list of them) in your initial ItemList during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the _processor variable of your class of items (this can be a list of PreProcessor classes).

A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can process texts by tokenizing and then numericalizing them; in that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.

Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the PreProcessor and applied on the validation set.

This is the generic class for all processors.

class PreProcessor[source][test]

PreProcessor(ds:Collection[T_co]=None)

No tests found for PreProcessor. To contribute a test please refer to this guide and this discussion.

Basic class for a processor that will be applied to items at the end of the data block API.

process_one[source][test]

process_one(item:Any)

Tests found for process_one:

Some other tests where process_one is used:

  • pytest -sv tests/test_data_block.py::test_category_processor_existing_class [source]
  • pytest -sv tests/test_data_block.py::test_category_processor_non_existing_class [source]

To run tests please refer to this guide.

Process one item. This method needs to be written in any subclass.

process[source][test]

process(ds:Collection[T_co])

No tests found for process. To contribute a test please refer to this guide and this discussion.

Process a dataset. This defaults to applying process_one to every item of ds.
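
To make the pattern concrete, here is a minimal, hypothetical sketch (MeanShiftProcessor is not a fastai built-in) of a processor that computes its state the first time process is called, i.e. on the training set, then reuses that same state for the validation (and test) sets:

from fastai.vision import *   # provides PreProcessor and np
class MeanShiftProcessor(PreProcessor):
    "Hypothetical processor: subtract the training-set mean from every item."
    def process_one(self, item): return item - self.mean
    def process(self, ds):
        # the state (the mean) is computed once, on the first dataset processed (the training set)
        if not hasattr(self, 'mean'): self.mean = float(np.mean(ds.items))
        ds.items = np.array([self.process_one(o) for o in ds.items])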

class CategoryProcessor[source][test]

CategoryProcessor(ds:ItemList) :: PreProcessor

No tests found for CategoryProcessor. To contribute a test please refer to this guide and this discussion.

PreProcessor that creates classes from ds.items and handles the mapping.

generate_classes[source][test]

generate_classes(items)

No tests found for generate_classes. To contribute a test please refer to this guide and this discussion.

Generate classes from items by taking the sorted unique values.

class MultiCategoryProcessor[source][test]

MultiCategoryProcessor(ds:ItemList, one_hot:bool=False) :: CategoryProcessor

No tests found for MultiCategoryProcessor. To contribute a test please refer to this guide and this discussion.

PreProcessor that creates classes from ds.items and handles the mapping.

generate_classes[source][test]

generate_classes(items)

No tests found for generate_classes. To contribute a test please refer to this guide and this discussion.

Generate classes from items by taking the sorted unique values.

Optional steps

Add transforms

Transforms differ from processors in the sense that they are applied on the fly when we grab one item; they may also change each time we ask for the same item, in the case of random transforms.

transform[source][test]

transform(tfms:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=(None, None), **kwargs)

No tests found for transform. To contribute a test please refer to this guide and this discussion.

Set tfms to be applied to the xs of the train and validation set.

This is primarily for the vision application. The kwargs arguments are the ones expected by the type of transforms you pass. tfm_y is among them, and if set to True, the transforms will be applied to both the input and the target.

For examples see: vision.transforms.

Add a test set

To add a test set, you can use one of the two following methods.

add_test[source][test]

add_test(items:Iterator[T_co], label:Any=None)

No tests found for add_test. To contribute a test please refer to this guide and this discussion.

Add test set containing items with an arbitrary label.
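
For instance, a minimal sketch on MNIST_TINY that passes the test items explicitly (an alternative to add_test_folder shown below):

path = untar_data(URLs.MNIST_TINY)
test_items = ImageList.from_folder(path/'test')   # unlabelled test images
ll = (ImageList.from_folder(path/'train')
      .split_by_rand_pct()
      .label_from_folder()
      .add_test(test_items))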

add_test_folder[source][test]

add_test_folder(test_folder:str='test', label:Any=None)

No tests found for add_test_folder. To contribute a test please refer to this guide and this discussion.

Add test set containing items from test_folder and an arbitrary label.

Note that in both cases the test set is unlabelled: either the passed label argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai).

In the fastai framework test datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a test dataset with labels, you probably need to use it as a validation set, as in:

data_test = (ImageList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        ...)

Another approach is to use a normal validation set during training and then, once the training is over, validate on the labelled test set by treating it as a validation set:

tfms = []
path = Path('data').resolve()
data = (ImageList.from_folder(path)
        .split_by_rand_pct()
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize() ) 
learn = cnn_learner(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)

# now replace the validation dataset entry with the test dataset as a new validation dataset: 
# everything is exactly the same, except replacing `split_by_rand_pct` w/ `split_by_folder` 
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize()
       ) 
learn.validate(data_test.valid_dl)

Of course, your data block can be totally different, this is just an example.

Step 4: Convert to a DataBunch

This last step is usually pretty straightforward. You just have to pass along any arguments you want to give to DataBunch.create (bs, num_workers, collate_fn). The class called to create the DataBunch is set in the _bunch attribute of the inputs of the training set, should you need to modify it. Normally, the various subclasses we showed before handle that for you.
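
For instance, a minimal sketch passing a couple of those arguments through (reusing the MNIST_TINY path from the previous example):

data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .databunch(bs=32, num_workers=2))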

databunch[source][test]

databunch(path:PathOrStr=None, bs:int=64, val_bs:int=None, num_workers:int=4, dl_tfms:Optional[Collection[Callable]]=None, device:device=None, collate_fn:Callable='data_collate', no_check:bool=False, **kwargs) → DataBunch

Tests found for databunch:

  • pytest -sv tests/test_vision_data.py::test_vision_datasets [source]

Some other tests where databunch is used:

  • pytest -sv tests/test_data_block.py::test_regression [source]

To run tests please refer to this guide.

Create a DataBunch from self, path will override self.path, kwargs are passed to DataBunch.create.

Inner classes

class LabelList[source][test]

LabelList(x:ItemList, y:ItemList, tfms:Union[Callable, Collection[Callable]]=None, tfm_y:bool=False, **kwargs) :: Dataset

No tests found for LabelList. To contribute a test please refer to this guide and this discussion.

A list of inputs x and labels y with optional tfms.

Optionally apply tfms to y if tfm_y is True.

export[source][test]

export(fn:PathOrStr, **kwargs)

No tests found for export. To contribute a test please refer to this guide and this discussion.

Export the minimal state and save it in fn to load an empty version for inference.

transform_y[source][test]

transform_y(tfms:Union[Callable, Collection[Callable]]=None, **kwargs)

No tests found for transform_y. To contribute a test please refer to this guide and this discussion.

Set tfms to be applied to the targets only.

get_state[source][test]

get_state(**kwargs)

No tests found for get_state. To contribute a test please refer to this guide and this discussion.

Return the minimal state for export.

load_empty[source][test]

load_empty(path:PathOrStr, fn:PathOrStr)

No tests found for load_empty. To contribute a test please refer to this guide and this discussion.

Load the state in fn to create an empty LabelList for inference.

load_state[source][test]

load_state(path:PathOrStr, state:dict) → LabelList

No tests found for load_state. To contribute a test please refer to this guide and this discussion.

Create a LabelList from state.

process[source][test]

process(xp:PreProcessor=None, yp:PreProcessor=None, name:str=None)

No tests found for process. To contribute a test please refer to this guide and this discussion.

Launch the processing on self.x and self.y with xp and yp.

set_item[source][test]

set_item(item)

No tests found for set_item. To contribute a test please refer to this guide and this discussion.

For inference, will briefly replace the dataset with one that only contains item.

to_df[source][test]

to_df()

No tests found for to_df. To contribute a test please refer to this guide and this discussion.

Create pd.DataFrame containing items from self.x and self.y.

to_csv[source][test]

to_csv(dest:str)

No tests found for to_csv. To contribute a test please refer to this guide and this discussion.

Save self.to_df() to a CSV file in self.path/dest.

transform[source][test]

transform(tfms:Union[Callable, Collection[Callable]], tfm_y:bool=None, **kwargs)

No tests found for transform. To contribute a test please refer to this guide and this discussion.

Set the tfms and tfm_y value to be applied to the inputs and targets.

class ItemLists[source][test]

ItemLists(path:PathOrStr, train:ItemList, valid:ItemList)

No tests found for ItemLists. To contribute a test please refer to this guide and this discussion.

An ItemList for each of train and valid (optional test).

label_from_lists[source][test]

label_from_lists(train_labels:Iterator[T_co], valid_labels:Iterator[T_co], label_cls:Callable=None, **kwargs) → LabelList

No tests found for label_from_lists. To contribute a test please refer to this guide and this discussion.

Use the labels in train_labels and valid_labels to label the data. label_cls will overwrite the default.

transform[source][test]

transform(tfms:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=(None, None), **kwargs)

No tests found for transform. To contribute a test please refer to this guide and this discussion.

Set tfms to be applied to the xs of the train and validation set.

transform_y[source][test]

transform_y(tfms:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=(None, None), **kwargs)

No tests found for transform_y. To contribute a test please refer to this guide and this discussion.

Set tfms to be applied to the ys of the train and validation set.

class LabelLists[source][test]

LabelLists(path:PathOrStr, train:ItemList, valid:ItemList) :: ItemLists

No tests found for LabelLists. To contribute a test please refer to this guide and this discussion.

A LabelList for each of train and valid (optional test).

get_processors[source][test]

get_processors()

No tests found for get_processors. To contribute a test please refer to this guide and this discussion.

Read the default class processors if none have been set.

load_empty[source][test]

load_empty(path:PathOrStr, fn:PathOrStr='export.pkl')

No tests found for load_empty. To contribute a test please refer to this guide and this discussion.

Create a LabelLists with empty sets from the serialized file in path/fn.

load_state[source][test]

load_state(path:PathOrStr, state:dict)

No tests found for load_state. To contribute a test please refer to this guide and this discussion.

Create a LabelLists with empty sets from the serialized state.

process[source][test]

process()

No tests found for process. To contribute a test please refer to this guide and this discussion.

Process the inner datasets.

Helper functions

get_files[source][test]

get_files(path:PathOrStr, extensions:StrList=None, recurse:bool=False, include:OptStrList=None) → FilePathList

No tests found for get_files. To contribute a test please refer to this guide and this discussion.

Return list of files in path that have a suffix in extensions; optionally recurse.