Evaluating the performance of recommendation models
How do we know whether the model we have trained is a good one? We need some way to evaluate its predictive performance. Evaluation metrics are measures of a model's predictive capability or accuracy. Some are direct measures of how well a model predicts its target variable (such as Mean Squared Error), while others measure how well the model performs at predicting things that might not be directly optimized in the model but are often closer to what we care about in the real world (such as Mean Average Precision).
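To make the first of these concrete, Mean Squared Error averages the squared differences between actual and predicted ratings. The following is a minimal sketch; the ratings and predictions are made-up illustrative values, and the function name is our own:

```python
def mean_squared_error(actual, predicted):
    """Average of squared differences between actual and predicted ratings."""
    assert len(actual) == len(predicted)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical ratings for three user-item pairs
actual_ratings = [5.0, 3.0, 4.0]
predicted_ratings = [4.5, 2.5, 5.0]

print(mean_squared_error(actual_ratings, predicted_ratings))  # 0.5
```

Because the errors are squared, large mispredictions are penalized much more heavily than small ones, which is why MSE is a natural fit for models that optimize a squared-error objective.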
Evaluation metrics provide a standardized way of comparing the performance of the same model with different parameter settings, and of comparing performance across different models. Using these metrics, we can perform model selection to choose the best-performing model from the set of models we wish to evaluate.
Here, we will show you how to calculate two common evaluation metrics used in recommender systems and collaborative filtering models: Mean Squared Error and Mean Average Precision at K.
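Mean Average Precision at K (MAPK) scores a ranked list of recommendations: for each user, it averages the precision at each rank in the top K where a relevant item appears, then takes the mean of these per-user scores. Exact formulations vary slightly across sources; the sketch below is one common variant, with hypothetical function names and made-up data:

```python
def avg_precision_at_k(actual, recommended, k=10):
    """Average precision at K for one user: mean of the precision values
    at each rank (within the top K) where a relevant item appears."""
    if not actual:
        return 0.0
    score, hits = 0.0, 0
    for i, item in enumerate(recommended[:k]):
        if item in actual:
            hits += 1
            score += hits / (i + 1)  # precision at rank i+1
    return score / min(len(actual), k)

def mean_avg_precision_at_k(actual_lists, recommended_lists, k=10):
    """Mean of the per-user average precision at K scores."""
    aps = [avg_precision_at_k(a, r, k)
           for a, r in zip(actual_lists, recommended_lists)]
    return sum(aps) / len(aps)

# One user whose relevant items are {1, 3}, recommended [1, 2, 3, 4]:
# hits at ranks 1 and 3 give (1/1 + 2/3) / 2 = 5/6
print(avg_precision_at_k({1, 3}, [1, 2, 3, 4], k=4))
```

Unlike MSE, MAPK rewards placing relevant items near the top of the list, which is usually closer to what we care about when presenting a short ranked list of recommendations to a user.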