The data block API

The data block API lets you customize the creation of a DataBunch by isolating the underlying parts of that process in separate blocks, mainly:

  1. Where are the inputs and how to create them?
  2. How to split the data into training and validation sets?
  3. How to label the inputs?
  4. What transforms to apply?
  5. How to add a test set?
  6. How to wrap in dataloaders and create the DataBunch?

Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices, or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but they may come from folders or from a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally, you have to set the arguments to put the data together in a DataBunch (batch size, collate function...)

The data block API is called as such because you can mix and match each one of those blocks with the others, allowing total flexibility to create your customized DataBunch for training, validation and testing. The factory methods of the various DataBunch subclasses are great for beginners, but you can't always make your data fit the tracks they require.

As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.

Examples of use

Let's begin with our traditional MNIST example.

path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
[PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/labels.csv'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/export.pkl'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/test'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/history.csv'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/models'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/cleaned.csv'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/valid')]
(path/'train').ls()
[PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train/3'),
 PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train/7')]

In vision.data, we can create a DataBunch suitable for classification by simply typing:

data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)

This is aimed at data that is in folders following an ImageNet style, with the train and valid directories, each containing one subdirectory per class, where all the pictures are. There is also a test directory containing unlabelled pictures. With the data block API, we can group everything together like this:

data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders
        .split_by_folder()              #How to split in train/valid? -> use the folders
        .label_from_folder()            #How to label? -> depending on the folder of the filenames
        .add_test_folder()              #Optionally add a test set (here default name is test)
        .transform(tfms, size=64)       #Data augmentation? -> use tfms with a size of 64
        .databunch())                   #Finally? -> use the defaults for conversion to ImageDataBunch
data.show_batch(3, figsize=(6,6), hide_axis=False)

Let's look at another example from vision.data with the planet dataset. This time, it's a multi-label classification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:

planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep=' ', ds_tfms=planet_tfms)

With the data block API, we can rewrite it like this:

data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        #Where to find the data? -> in planet 'train' folder
        .random_split_by_pct()
        #How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_df(sep=' ')
        #How to label? -> use the csv file
        .transform(planet_tfms, size=128)
        #Data augmentation? -> use tfms with a size of 128
        .databunch())                          
        #Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(9,7))

The data block API also allows you to get your data together for problems for which there is no direct ImageDataBunch factory method. For a segmentation task, for instance, we can use it to quickly get a DataBunch. Let's take the example of the camvid dataset. The images are in an 'images' folder and their corresponding masks are in a 'labels' folder.

camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'

We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...).

codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
array(['Animal', 'Archway', 'Bicyclist', 'Bridge', 'Building', 'Car', 'CartLuggagePram', 'Child', 'Column_Pole',
       'Fence', 'LaneMkgsDriv', 'LaneMkgsNonDriv', 'Misc_Text', 'MotorcycleScooter', 'OtherMoving', 'ParkingBlock',
       'Pedestrian', 'Road', 'RoadShoulder', 'Sidewalk', 'SignSymbol', 'Sky', 'SUVPickupTruck', 'TrafficCone',
       'TrafficLight', 'Train', 'Tree', 'Truck_Bus', 'Tunnel', 'VegetationMisc', 'Void', 'Wall'], dtype='<U17')

And we define the following function that infers the mask filename from the image filename.

get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'

Then we can easily define a DataBunch using the data block API. Here we need to use tfm_y=True in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.

data = (SegmentationItemList.from_folder(path_img)
        .random_split_by_pct()
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), tfm_y=True, size=128)
        .databunch())
data.show_batch(rows=2, figsize=(7,5))

Here is another example, for object detection. We use our tiny sample of the COCO dataset. There is a helper function in the library that reads the annotation file and returns the list of image names along with the list of labelled bounding boxes associated with each one. We convert it to a dictionary that maps image names to their bounding boxes, and then write the function that will give us the target for each image filename.

coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]

The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have different numbers of bounding boxes, so we need to pad each sample to the largest number of bounding boxes in the batch.

data = (ObjectItemList.from_folder(coco)
        #Where are the images? -> in coco
        .random_split_by_pct()                          
        #How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_func(get_y_func)
        #How to find the labels? -> use get_y_func
        .transform(get_transforms(), tfm_y=True)
        #Data augmentation? -> Standard transforms with tfm_y=True
        .databunch(bs=16, collate_fn=bb_pad_collate))   
        #Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))

But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.

imdb = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
           #Where are the inputs? Column 'text' of this csv
                   .random_split_by_pct()
           #How to split it? Randomly with the default 20%
                   .label_for_lm()
           #Label it for a language model
                   .databunch())
data_lm.show_batch()
idx text
0 xxbos xxmaj cheech & xxmaj chong 's xxmaj next xxmaj movie ( 1980 ) was the second film to star to xxunk loving duo of xxmaj cheech xxmaj xxunk and xxmaj tommy xxmaj chong . xxmaj the lovable burn out xxunk are now roommates . xxmaj they live in a xxunk building looking for ways to score more smoke and just lay about all day . xxmaj but xxmaj cheech
1 this episode , with a touching performance by xxmaj xxunk xxmaj xxunk as a woman exiled to the xxmaj ice xxmaj age , and xxmaj ian xxmaj xxunk as the xxunk xxmaj librarian . xxmaj somewhat reminiscent of the classic episode xxmaj city xxmaj on xxmaj the xxmaj edge of xxmaj forever , this time travel story is a rich and compelling finale to the series , which xxunk one
2 and previous movies but it xxunk away the old and xxunk with a modern tale of redemption xxunk the xxmaj tommy - xxmaj gun xxunk and xxunk xxunk . xxmaj it can feel a little slow in places , especially if you 're used to masses of gun - play in movies like most modern audiences ( like yours truly ) but sometimes , words can speak xxunk than actions
3 and thinks the armed forces is cool . xxmaj he is then given a crash course in the horrible realities of war . xxmaj the unlikely friendship and bonding between xxmaj bernie and xxmaj christina , each not knowing the fact that they are soldiers on different sides of the war , is played very real without going xxunk with the romance drama stuff . xxmaj same goes for the
4 and forgot that there was supposed to be a plot . \n\n xxmaj perhaps one of the most ridiculous scenes in the movie comes early on , when several villains plant an explosive device in an agents car . xxmaj for some reason , even though the device is clearly stated as being " remote xxunk " the bad guys decide to chase her down on their xxunk as she

For a classification problem, we just have to change the way labelling is done. Here we use the label column of the csv.

data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
                   .split_from_df(col='is_valid')
                   .label_from_df(cols='label')
                   .databunch())
data_clas.show_batch()
text target
xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n\n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj victor xxmaj negative
xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with positive
xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of " xxmaj at xxmaj the xxmaj movies " in taking xxmaj steven xxmaj soderbergh to task . \n\n xxmaj it 's usually satisfying to watch a film director change his style / subject , negative
xxbos xxmaj this film sat on my xxmaj xxunk for weeks before i watched it . i xxunk a self - indulgent xxunk flick about relationships gone bad . i was wrong ; this was an xxunk xxunk into the screwed - up xxunk of xxmaj new xxmaj xxunk . \n\n xxmaj the format is the same as xxmaj max xxmaj xxunk ' " xxmaj la xxmaj xxunk , " positive
xxbos xxmaj many neglect that this is n't just a classic due to the fact that it 's the first xxup 3d game , or even the first xxunk - up . xxmaj it 's also one of the first xxunk games , one of the xxunk definitely the first ) truly claustrophobic games , and just a pretty well - xxunk gaming experience in general . xxmaj with graphics positive

Lastly, for tabular data, we just have to pass the names of our categorical and continuous variables as extra arguments. We also add some PreProcessors that are going to be applied to our data once the splitting and labelling is done.

adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
                           .split_by_idx(valid_idx=range(800,1000))
                           .label_from_df(cols=dep_var)
                           .databunch())
data.show_batch()
workclass education marital-status occupation relationship race sex native-country education-num_na education-num hours-per-week age capital-loss fnlwgt capital-gain target
Private HS-grad Never-married Craft-repair Unmarried Asian-Pac-Islander Male Vietnam False -0.4224 -0.0356 -0.6294 -0.2164 0.7476 -0.1459 <50k
Private 9th Married-civ-spouse Farming-fishing Wife White Female United-States False -1.9869 0.1264 -0.5561 -0.2164 1.9847 -0.1459 <50k
Private Some-college Married-civ-spouse Transport-moving Husband White Male United-States False -0.0312 -0.0356 0.3968 -0.2164 0.1973 -0.1459 <50k
Self-emp-not-inc Bachelors Married-civ-spouse Prof-specialty Husband White Male United-States False 1.1422 -0.0356 1.7894 -0.2164 -0.6119 -0.1459 >=50k
? HS-grad Never-married ? Own-child Other Female United-States False -0.4224 -0.0356 -1.5090 -0.2164 1.8018 -0.1459 <50k

Step 1: Provide inputs

The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name ItemList).

class ItemList[source]

ItemList(`items`:Iterator[T_co], `path`:PathOrStr=`'.'`, `label_cls`:Callable=`None`, `xtra`:Any=`None`, `processor`:PreProcessor=`None`, `x`:ItemList=`None`, `ignore_empty`:bool=`False`)

A collection of items with __len__ and __getitem__ with ndarray indexing semantics.

This class regroups the inputs for our model in items and saves a path attribute which is where it will look for any files (image files, csv file with labels...). label_cls will be called to create the labels from the result of the label function, xtra contains additional information (usually an underlying dataframe), and processor is applied to the inputs after the splitting and labelling.

It has multiple subclasses depending on the type of data you're handling: the examples above used ImageItemList, SegmentationItemList, ObjectItemList, TextList and TabularList.

Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods.

from_folder[source]

from_folder(`path`:PathOrStr, `extensions`:StrList=`None`, `recurse`=`True`, `include`:OptStrList=`None`, `**kwargs`) → ItemList

Create an ItemList in path from the filenames that have a suffix in extensions. recurse determines if we search subfolders.
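For instance, with the camvid images folder used earlier, this would grab every image file under path_img and its subfolders (the extensions filter is optional and only shown as an illustration):

il = ImageItemList.from_folder(path_img, extensions=['.png'])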

from_df[source]

from_df(`df`:DataFrame, `path`:PathOrStr=`'.'`, `cols`:IntsOrStrs=`0`, `**kwargs`) → ItemList

Create an ItemList in path from the inputs in the cols of df.

from_csv[source]

from_csv(`path`:PathOrStr, `csv_name`:str, `cols`:IntsOrStrs=`0`, `header`:str=`'infer'`, `**kwargs`) → ItemList

Create an ItemList in path from the inputs in the cols of path/csv_name opened with header.
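For example, with the planet data from earlier, the first column of labels.csv holds the image filenames, which sit in the 'train' subfolder with a '.jpg' suffix. Either of the following builds the same input list (the second assumes you load the csv into a dataframe yourself first):

il = ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
df = pd.read_csv(planet/'labels.csv')
il = ImageItemList.from_df(df, planet, folder='train', suffix='.jpg')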

Optional step: filter your data

The factory method may have grabbed too many items. For instance, if you were searching subfolders with the from_folder method, you may have gotten files you don't want. To remove those, you can use one of the following methods.

filter_by_func[source]

filter_by_func(`func`:Callable) → ItemList

Only keep elements for which func returns True.
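For instance, to keep only the png files among inputs grabbed from a folder (a made-up criterion; with an image list the items are filenames, so we can test their suffix):

il = ImageItemList.from_folder(path).filter_by_func(lambda fname: fname.suffix == '.png')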

filter_by_folder[source]

filter_by_folder(`include`=`None`, `exclude`=`None`)

Only keep filenames in include folder or reject the ones in exclude.

filter_by_rand[source]

filter_by_rand(`p`:float, `seed`:int=`None`)

Keep random sample of items with probability p and an optional seed.

to_text[source]

to_text(`fn`:str)

Save self.items to fn in self.path.

use_partial_data[source]

use_partial_data(`sample_pct`:float=`1.0`, `seed`:int=`None`) → ItemList

Use only a sample of sample_pct of the full dataset; an optional seed can be passed.
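For quick experiments you might keep only a small fraction of the data, for instance (the percentage and seed here are arbitrary):

il = ImageItemList.from_folder(path).use_partial_data(0.1, seed=42)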

Writing your own ItemList

First check if you can't easily customize one of the existing subclasses by:

  • subclassing an existing one and replacing the get method (or the open method if you're dealing with images)
  • applying a custom processor (see step 4)
  • changing the default label_cls for the label creation
  • adding a default PreProcessor with the _processor class variable

If this isn't the case and you really need to write your own class, there is a full tutorial that explains how to proceed.
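As a minimal sketch of the first bullet above, here is a hypothetical ItemList whose items are paths to .npy files saved on disk; only get is overridden, to load each array lazily:

import numpy as np
import torch

class NpyItemList(ItemList):
    def get(self, i):
        fn = super().get(i)                    # the base get returns self.items[i], here a filename
        return torch.from_numpy(np.load(fn))   # load the stored array and turn it into a tensor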

analyze_pred[source]

analyze_pred(`pred`:Tensor)

Called on pred before reconstruct for additional preprocessing.

get[source]

get(`i`) → Any

Subclass if you want to customize how to create item i from self.items.

new[source]

new(`items`:Iterator[T_co], `processor`:PreProcessor=`None`, `**kwargs`) → ItemList

Create a new ItemList from items, keeping the same attributes.

You'll never need to subclass this normally; just don't forget to add to self.copy_new the names of the arguments that need to be copied each time new is called in __init__.

reconstruct[source]

reconstruct(`t`:Tensor, `x`:Tensor=`None`)

Reconstruct one of the underlying items from its data t.

Step 2: Split the data between the training and the validation set

This step is normally straightforward: you just have to pick one of the following functions depending on what you need.

no_split[source]

no_split()

Don't split the data and create an empty validation set.

random_split_by_pct[source]

random_split_by_pct(`valid_pct`:float=`0.2`, `seed`:int=`None`) → ItemLists

Split the items randomly by putting valid_pct in the validation set, optional seed can be passed.

split_by_files[source]

split_by_files(`valid_names`:ItemList) → ItemLists

Split the data by using the names in valid_names for validation.

split_by_fname_file[source]

split_by_fname_file(`fname`:PathOrStr, `path`:PathOrStr=`None`) → ItemLists

Split the data by using the names in fname for the validation set. path will override self.path.

split_by_folder[source]

split_by_folder(`train`:str=`'train'`, `valid`:str=`'valid'`) → ItemLists

Split the data depending on the folder (train or valid) in which the filenames are.

split_by_idx[source]

split_by_idx(`valid_idx`:Collection[int]) → ItemLists

Split the data according to the indexes in valid_idx.

split_by_idxs[source]

split_by_idxs(`train_idx`, `valid_idx`)

Split the data between train_idx and valid_idx.

split_by_list[source]

split_by_list(`train`, `valid`)

Split the data between train and valid.

split_by_valid_func[source]

split_by_valid_func(`func`:Callable) → ItemLists

Split the data by result of func (which returns True for validation set).
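For example, to put every file whose name contains 'valid' in the validation set (a made-up convention):

sd = ImageItemList.from_folder(path).split_by_valid_func(lambda o: 'valid' in str(o))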

split_from_df[source]

split_from_df(`col`:IntsOrStrs=`2`)

Split the data from the col in the dataframe in self.xtra.

Step 3: Label the inputs

To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a label_cls that will be used to create those labels (the default is the one from your input ItemList, and if there is none, it will go to CategoryList, MultiCategoryList or FloatList depending on the type of the labels). This is implemented in the following function:

get_label_cls[source]

get_label_cls(`labels`, `label_cls`:Callable=`None`, `sep`:str=`None`, `**kwargs`)

Return label_cls or guess one from the first element of labels.
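For instance, if the labels in your dataframe are numbers but you want a regression problem rather than a classification one, you can force the label class (here df and the column name 'age' are hypothetical):

ll = (ItemList.from_df(df, path)
      .random_split_by_pct()
      .label_from_df(cols='age', label_cls=FloatList))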

The first example in these docs created labels as follows:

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train

If you want to save the data necessary to recreate your LabelList (not including saving the actual image/text/etc files), you can use to_df or to_csv:

ll.to_csv('tmp.csv')

Or just grab a pd.DataFrame directly:

ll.to_df().head()
x y
0 train/3/9932.png 3
1 train/3/7189.png 3
2 train/3/8498.png 3
3 train/3/8888.png 3
4 train/3/9004.png 3

label_empty[source]

label_empty()

Label every item with an EmptyLabel.

label_from_list[source]

label_from_list(`labels`:Iterator[T_co], `**kwargs`) → LabelList

Label self.items with labels.

label_from_df[source]

label_from_df(`cols`:IntsOrStrs=`1`, `**kwargs`)

Label self.items from the values in cols in self.xtra.

label_const[source]

label_const(`const`:Any=`0`, `**kwargs`) → LabelList

Label every item with const.

label_from_folder[source]

label_from_folder(`**kwargs`) → LabelList

Give a label to each filename depending on its folder.

label_from_func[source]

label_from_func(`func`:Callable, `**kwargs`) → LabelList

Apply func to every input to get its label.

label_from_re[source]

label_from_re(`pat`:str, `full_path`:bool=`False`, `**kwargs`) → LabelList

Apply the re in pat to determine the label of every filename. If full_path, search in the full name.
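For example, with filenames like path/cat_12.jpg where the class name is everything before the last underscore (a hypothetical naming scheme):

ll = (ImageItemList.from_folder(path)
      .random_split_by_pct()
      .label_from_re(r'/([^/]+)_\d+\.jpg$'))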

class CategoryList[source]

CategoryList(`items`:Iterator[T_co], `classes`:Collection[T_co]=`None`, `sep`:str=`None`, `**kwargs`) :: CategoryListBase

Basic ItemList for single classification labels.

ItemList suitable for storing labels in items belonging to classes. If None are passed, classes will be determined by the unique different labels. processor will default to CategoryProcessor.

class MultiCategoryList[source]

MultiCategoryList(`items`:Iterator[T_co], `classes`:Collection[T_co]=`None`, `sep`:str=`None`, `one_hot`:bool=`False`, `**kwargs`) :: CategoryListBase

Basic ItemList for multi-classification labels.

It will store list of labels in items belonging to classes. If None are passed, classes will be determined by the unique different labels. sep is used to split the content of items in a list of tags.

If one_hot=True, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of classes (as we can't use the different labels).

class FloatList[source]

FloatList(`items`:Iterator[T_co], `log`:bool=`False`, `**kwargs`) :: ItemList

ItemList suitable for storing the floats in items for regression. Will take the log of the values if this flag is True.

class EmptyLabelList[source]

EmptyLabelList(`items`:Iterator[T_co], `path`:PathOrStr=`'.'`, `label_cls`:Callable=`None`, `xtra`:Any=`None`, `processor`:PreProcessor=`None`, `x`:ItemList=`None`, `ignore_empty`:bool=`False`) :: ItemList

Basic ItemList for dummy labels.

Invisible step: preprocessing

This isn't seen here in the API, but if you passed a processor (or a list of them) in your initial ItemList during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the _processor variable of your class of items (this can be a list of PreProcessor classes).

A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize and then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.

Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the PreProcessor and applied on the validation set.

This is the generic class for all processors.

class PreProcessor[source]

PreProcessor(`ds`:Collection[T_co]=`None`)

Basic class for a processor that will be applied to items at the end of the data block API.

process_one[source]

process_one(`item`:Any)

Process one item. This method needs to be written in any subclass.

process[source]

process(`ds`:Collection[T_co])

Process a dataset. This defaults to applying process_one to every item of ds.
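As a hedged sketch of a custom processor, suppose the items are plain numbers and we want to replace missing values with the median of the training set (everything here is hypothetical; the tabular application shown earlier uses FillMissing for this):

import numpy as np

class FillMedianProcessor(PreProcessor):
    def __init__(self, ds=None): self.median = None
    def process_one(self, item):
        return self.median if (item is None or np.isnan(item)) else item
    def process(self, ds):
        # the state is computed the first time process is called (on the training set)
        # and then reused as-is on the validation and test sets
        if self.median is None:
            self.median = np.nanmedian(np.array(ds.items, dtype=np.float64))
        super().process(ds)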

class CategoryProcessor[source]

CategoryProcessor(`ds`:ItemList) :: PreProcessor

PreProcessor that creates classes from ds.items and handles the mapping.

generate_classes[source]

generate_classes(`items`)

Generate classes from items by taking the sorted unique values.

class MultiCategoryProcessor[source]

MultiCategoryProcessor(`ds`:ItemList, `one_hot`:bool=`False`) :: CategoryProcessor

PreProcessor that creates classes from ds.items and handles the mapping.

generate_classes[source]

generate_classes(`items`)

Generate classes from items by taking the sorted unique values.

Optional steps

Add transforms

Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They may also change each time we ask for the same item, in the case of random transforms.

transform[source]

transform(`tfms`:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=`(None, None)`, `**kwargs`)

Set tfms to be applied to the xs of the train and validation set.

This is primarily for the vision application. The kwargs are the ones expected by the type of transforms you pass. tfm_y is among them: if set to True, the transforms will be applied to both the input and the target.

Add a test set

To add a test set, you can use one of the two following methods.

add_test[source]

add_test(`items`:Iterator[T_co], `label`:Any=`None`)

Add test set containing items with an arbitrary label.

add_test_folder[source]

add_test_folder(`test_folder`:str=`'test'`, `label`:Any=`None`)

Add test set containing items from test_folder and an arbitrary label.

Important! No labels will be collected even if they are available. Instead, either the passed label argument or the first label from train_ds will be used for all entries of this dataset.

In the fastai framework test datasets have no labels - this is the unknown data to be predicted.

If you want to use a test dataset with labels, you probably need to use it as a validation set, as in:

data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        ...)

Another approach: use a normal validation set during training, and then, once training is over, validate the labelled test set by treating it as a new validation set. You can do this:

tfms = []
path = Path('data').resolve()
data = (ImageItemList.from_folder(path)
        .random_split_by_pct()
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize() ) 
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)

# now replace the validation dataset entry with the test dataset as a new validation dataset: 
# everything is exactly the same, except replacing `random_split_by_pct` w/ `split_by_folder`
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize()
       ) 
learn.data = data_test
learn.validate()

Of course, your data block can be totally different; this is just an example.

Step 4: convert to a DataBunch

This last step is usually pretty straightforward. You just have to include any arguments you want to pass to DataBunch.create (bs, num_workers, collate_fn...). The class called to create the DataBunch is set in the _bunch attribute of the inputs of the training set, if you need to modify it. Normally, the various subclasses we showed before handle that for you.

databunch[source]

databunch(`path`:PathOrStr=`None`, `**kwargs`) → ImageDataBunch

Create a DataBunch from self; path will override self.path, and kwargs are passed to DataBunch.create.
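For example, rebuilding the MNIST DataBunch from the beginning of this page with a custom batch size and number of workers (the values are purely illustrative):

path = untar_data(URLs.MNIST_TINY)
data = (ImageItemList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .transform(get_transforms(do_flip=False), size=24)
        .databunch(bs=32, num_workers=4))  # extra kwargs are forwarded to DataBunch.create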

Inner classes

class LabelList[source]

LabelList(`x`:ItemList, `y`:ItemList, `tfms`:Union[Callable, Collection[Callable]]=`None`, `tfm_y`:bool=`False`, `**kwargs`) :: Dataset

A list of inputs x and labels y with optional tfms.

Optionally apply tfms to y if tfm_y is True.

export[source]

export(`fn`:PathOrStr, `**kwargs`)

Export the minimal state and save it in fn to load an empty version for inference.

transform_y[source]

transform_y(`tfms`:Union[Callable, Collection[Callable]]=`None`, `**kwargs`)

Set tfms to be applied to the targets only.

load_empty[source]

load_empty(`fn`:PathOrStr)

Load the state in fn to create an empty LabelList for inference.

process[source]

process(`xp`:PreProcessor=`None`, `yp`:PreProcessor=`None`, `name`:str=`None`)

Launch the processing on self.x and self.y with xp and yp.

set_item[source]

set_item(`item`)

For inference, will briefly replace the dataset with one that only contains item.

to_df[source]

to_df()

Create pd.DataFrame containing items from self.x and self.y.

to_csv[source]

to_csv(`dest`:str)

Save self.to_df() to a CSV file in self.path/dest.

transform[source]

transform(`tfms`:Union[Callable, Collection[Callable]], `tfm_y`:bool=`None`, `**kwargs`)

Set the tfms and tfm_y value to be applied to the inputs and targets.

class ItemLists[source]

ItemLists(`path`:PathOrStr, `train`:ItemList, `valid`:ItemList, `test`:ItemList=`None`)

An ItemList for each of train and valid (optional test).

label_from_lists[source]

label_from_lists(`train_labels`:Iterator[T_co], `valid_labels`:Iterator[T_co], `label_cls`:Callable=`None`, `**kwargs`) → LabelList

Use the labels in train_labels and valid_labels to label the data. label_cls will overwrite the default.

transform[source]

transform(`tfms`:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=`(None, None)`, `**kwargs`)

Set tfms to be applied to the xs of the train and validation set.

transform_y[source]

transform_y(`tfms`:Optional[Tuple[Union[Callable, Collection[Callable]], Union[Callable, Collection[Callable]]]]=`(None, None)`, `**kwargs`)

Set tfms to be applied to the ys of the train and validation set.

class LabelLists[source]

LabelLists(`path`:PathOrStr, `train`:ItemList, `valid`:ItemList, `test`:ItemList=`None`) :: ItemLists

A LabelList for each of train and valid (optional test).

get_processors[source]

get_processors()

Read the default class processors if none have been set.

load_empty[source]

load_empty(`path`:PathOrStr, `fn`:PathOrStr=`'export.pkl'`)

Create a LabelLists with empty sets from the serialized file in path/fn.

process[source]

process()

Process the inner datasets.

Helper functions

get_files[source]

get_files(`path`:PathOrStr, `extensions`:StrList=`None`, `recurse`:bool=`False`, `include`:OptStrList=`None`) → FilePathList

Return list of files in path that have a suffix in extensions; optionally recurse.
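For instance, to list every png file under the camvid images folder and its subfolders (the extension filter is just an illustration):

fnames = get_files(path_img, extensions=['.png'], recurse=True)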