recpack.algorithms.loss_functions.top1_max_loss

recpack.algorithms.loss_functions.top1_max_loss(positive_scores: torch.Tensor, negative_scores: torch.Tensor) → torch.Tensor

TOP1 Max Loss.

This is a differentiable approximation to the TOP1 loss between the target item and the negative sample with the highest score. It can be defined as:

\[L_{top1-max} = \sum\limits_{j=1}^{N_S} s_j\left(\sigma(r_j - r_i) + \sigma(r_j^2)\right)\]

where \(N_S\) is the number of negative samples, \(r_i\) is the target score and \(r_j\) is the score given to the sampled negative. The TOP1 loss between target score and the maximum sampled score is approximated by computing a softmax distribution over the negative samples and using the softmax values \(s_j\) as weights.

See the 2018 paper “Recurrent Neural Networks with Top-k Gains for Session-based Recommendations” by Hidasi and Karatzoglou for the motivation behind these changes to the original TOP1 loss.
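
To make the formula concrete, the following is a minimal PyTorch sketch of the computation, not the recpack implementation itself. It assumes positive_scores has shape (batch_size, 1) and negative_scores has shape (batch_size, num_negatives), and it averages the per-example losses over the batch; both conventions are assumptions made for illustration.

    import torch

    def top1_max_loss_sketch(positive_scores: torch.Tensor, negative_scores: torch.Tensor) -> torch.Tensor:
        # Softmax over the sampled negatives gives the weights s_j,
        # so higher-scoring negatives contribute more to the loss.
        weights = torch.softmax(negative_scores, dim=1)

        # Per-pair TOP1 terms sigma(r_j - r_i) + sigma(r_j^2); the positive
        # score broadcasts over all negatives in its row.
        pairwise = torch.sigmoid(negative_scores - positive_scores) + torch.sigmoid(negative_scores ** 2)

        # Weighted sum over negatives per example, then a batch mean
        # (the batch reduction is an assumption, not part of the formula above).
        return (weights * pairwise).sum(dim=1).mean()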

Parameters
  • positive_scores (torch.Tensor) – Output values assigned to positive samples

  • negative_scores (torch.Tensor) – Output values assigned to negative samples

Returns

Computed TOP1 Max Loss

Return type

torch.Tensor
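
As a quick usage example, the sketch above can be exercised with random scores; the shapes follow the same assumed convention, and in a real model the scores would come from the network outputs.

    import torch

    positive_scores = torch.randn(4, 1, requires_grad=True)  # one target score per example
    negative_scores = torch.randn(4, 3, requires_grad=True)  # three sampled negatives per example

    loss = top1_max_loss_sketch(positive_scores, negative_scores)
    loss.backward()  # gradients flow back to both score tensors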