Algorithm assessment metrics.
Recall@k
Recall@k is a standard information retrieval metric. For example, suppose that we computed recall@10 equal to 40% for our top-10 recommendation system. This means that 40% of the total number of relevant items appear in the top-10 results.
More formally, we have:
$$Recall@k = \frac{\text{number of recommended items @k that are relevant}}{\text{total number of relevant items}}$$
Arguments:
- predictions (list[int]): The list of recommended items
- targets (list[int]): The list of relevant items
- k (int): The rank cut-off at which recall is computed - default: 10
predictions = [5, 6, 32, 67, 1, 15, 7, 89, 10, 43]
targets = [15, 5, 44, 35, 67, 101, 7, 80, 43, 12]
assert recall_at_k(predictions, targets, 5) == .2, 'Recall@k should be equal to 2/10 = 0.2'
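For reference, here is a minimal sketch of how recall_at_k could be implemented. The function name and signature are taken from the example above; the body below is an assumption consistent with the formula, not the actual library code, and it ignores edge cases such as an empty target list.

def recall_at_k(predictions: list[int], targets: list[int], k: int = 10) -> float:
    # Assumed sketch: count how many of the top-k recommended items
    # are relevant, then divide by the total number of relevant items.
    relevant = set(targets)
    hits = sum(1 for item in predictions[:k] if item in relevant)
    return hits / len(relevant)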
Precision@k
Precision@k is a standard information retrieval metric. For example, an interpretation of precision@k computed at 80% is that 80% of the top-k recommendations made are relevant to the user.
More formally, we have:
$$Precision@k = \frac{\text{number of recommended items @k that are relevant}}{\text{number of recommended items @k}}$$
Arguments:
- predictions (list[int]): The list of recommended items
- targets (list[int]): The list of relevant items
- k (int): The rank cut-off at which precision is computed - default: 10
predictions = [5, 6, 32, 67, 1, 15, 7, 89, 10, 43]
targets = [15, 5, 44, 35, 67, 101, 7, 80, 43, 12]
assert precision_at_k(predictions, targets, 5) == .4, 'Precision@k should be equal to 2/5 = 0.4'
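A matching sketch of precision_at_k under the same assumptions: the only change from recall_at_k is the denominator, which is the number of recommendations considered at k rather than the total number of relevant items.

def precision_at_k(predictions: list[int], targets: list[int], k: int = 10) -> float:
    # Assumed sketch: count how many of the top-k recommended items are
    # relevant, then divide by the number of recommendations considered.
    relevant = set(targets)
    top_k = predictions[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)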