Contrastive Learning of Preferences with a Contextual InfoNCE Loss

Timo Bertram, Johannes Fürnkranz, Martin Müller · July 8, 2024

Summary

The paper "Contrastive Learning of Preferences with a Contextual InfoNCE Loss" introduces an adaptation of the Contrastive Learning framework (CLIP) to address the challenge of contextual preference ranking in scenarios like collectable card games. The authors modify the InfoNCE loss to handle multiple pairwise preferences in a single comparison, crucial for dealing with items having multiple positive associations within the same batch. This adaptation is demonstrated to outperform previous work trained with the triplet loss, while alleviating problems associated with mining triplets. The paper focuses on the domain of collectable card games, specifically deckbuilding, aiming to learn an embedding space that captures the associations between single cards and card pools based on human selections. The adapted method uses Siamese Neural Networks (SNNs) and the InfoNCE loss to predict human player decisions in a context-specific framework, enabling the comparison of an arbitrary amount of items and ranking them in a given context. The experiments show that the proposed method effectively models card preferences, outperforming previous research using the triplet loss and an unaltered version of CLIP. The research highlights the importance of contextual preferences in various domains and sets a baseline for future work in this area.
