This package contains all the necessary functions to quickly train a model for a collaborative filtering task. Let's start by importing all we'll need.
from fastai import *
from fastai.collab import *
Collaborative filtering is the task of predicting how much a user will like a certain item. The fastai library contains a
CollabFilteringDataset class that will help you create datasets suitable for training, and a function
get_collab_learner to build a simple model directly from a ratings table. Let's first see how we can get started before delving into the documentation.
For our example, we'll use a small subset of the MovieLens dataset, where the task is to predict the rating a user gave a given movie (from 0 to 5). It comes as a csv file where each line is one user's rating of one movie.
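If you don't have the dataset at hand, a toy table in the same shape can be sketched with the standard library. This is purely illustrative (the values and the userId column name are assumptions mirroring the MovieLens sample, not part of fastai):

```python
import csv
import io

# A few lines in the same format as ratings.csv: one (user, movie, rating) per row.
raw = """userId,movieId,rating
73,1097,4.0
561,924,3.5
157,260,3.5
"""

rows = list(csv.DictReader(io.StringIO(raw)))
for r in rows:
    print(r["userId"], r["movieId"], r["rating"])
```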
path = untar_data(URLs.ML_SAMPLE)
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
We'll first turn the userId and
movieId columns into category codes, so that we can replace them with their codes when it's time to feed them to an
Embedding layer. This step would be even more important if our csv had names of users or names of items in it.
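Concretely, turning a column into category codes can be sketched with pandas (the id values below are illustrative):

```python
import pandas as pd

# Replace raw ids with contiguous integer codes, as an Embedding layer expects.
movie_ids = pd.Series([1097, 924, 1097, 260]).astype('category')
codes = movie_ids.cat.codes
# Categories are sorted, so 260 -> 0, 924 -> 1, 1097 -> 2.
print(codes.tolist())  # [2, 1, 2, 0]
```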
learn = get_collab_learner(ratings, n_factors=50, pct_val=0.2, min_score=0., max_score=5.)
And then immediately begin training:
learn.fit_one_cycle(5, 5e-3, wd=0.1)
Total time: 00:04
epoch  train loss  valid loss
1      2.368736    1.849535   (00:00)
2      1.080932    0.691473   (00:00)
3      0.740156    0.669135   (00:00)
4      0.629487    0.658641   (00:00)
5      0.599293    0.654870   (00:00)
This is the basic class to build a
Dataset suitable for collaborative filtering.
user and item should be categorical series that will be replaced with their codes internally, paired with the corresponding
ratings. One of the factory methods will prepare the data in this format.
Takes rating_df and splits it randomly into train and validation sets following
pct_val (unless it's None).
user_name, item_name and rating_name give the names of the corresponding columns (they default to the first, second and third columns respectively).
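The column-name defaulting just described can be sketched like this (a hypothetical helper, not the library's actual code):

```python
import pandas as pd

def resolve_columns(df, user_name=None, item_name=None, rating_name=None):
    # Fall back to the first, second and third columns when no names are given.
    cols = list(df.columns)
    return (user_name or cols[0], item_name or cols[1], rating_name or cols[2])

df = pd.DataFrame({'userId': [1], 'movieId': [2], 'rating': [3.0]})
print(resolve_columns(df))                       # ('userId', 'movieId', 'rating')
print(resolve_columns(df, rating_name='score'))  # ('userId', 'movieId', 'score')
```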
Opens the file in
csv_name as a
DataFrame and feeds it to
from_df with the kwargs.
Creates a simple model with
Embedding weights and biases for all users and items, with
n_factors latent factors. Takes the dot product of the embeddings and adds the biases, then feeds the result to a sigmoid rescaled to go from min_score to max_score.
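The forward pass just described can be sketched in plain Python (the vectors and biases below are toy numbers; a real model would use learned Embedding weights):

```python
import math

def predict(user_vec, item_vec, user_bias, item_bias, min_score=0., max_score=5.):
    # Dot product of the two embeddings, plus both biases...
    dot = sum(u * i for u, i in zip(user_vec, item_vec)) + user_bias + item_bias
    # ...squashed by a sigmoid, then rescaled to the [min_score, max_score] range.
    return min_score + (max_score - min_score) / (1 + math.exp(-dot))

score = predict([0.2, -0.1, 0.5], [0.3, 0.4, -0.2], 0.1, -0.05)
print(score)
```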
Creates a
Learner object built from the data in a
CollabFilteringDataset. Optionally, creates another one for a test set.
kwargs are fed to
DataBunch.create with these datasets. The model is the simple embedding dot-product model described above, built with
n_factors, min_score and max_score (the numbers of users and items will be inferred from the data).
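"Inferred from the data" can be sketched simply: once the ids have been turned into contiguous 0-based codes, each embedding-table size is just one more than the largest code (a hypothetical helper for illustration):

```python
def infer_sizes(user_codes, item_codes):
    # Codes are 0-based and contiguous, so the table size is max + 1.
    return max(user_codes) + 1, max(item_codes) + 1

n_users, n_items = infer_sizes([0, 1, 1, 2], [0, 0, 1, 3])
print(n_users, n_items)  # 3 4
```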