Cross-modal Retrieval Using Contrastive Learning of Visual-Semantic Embeddings
ISSN
1051-4651
Date Issued
2022-01-01
Author(s)
Jain, Anurag
Verma, Yashaswi
DOI
10.1109/ICPR56361.2022.9956317
Abstract
Contrastive learning is a powerful technique for learning representations that are semantically distinctive and geometrically invariant. While most earlier approaches demonstrated its effectiveness on single-modality tasks such as image classification, there have recently been a few attempts to extend the idea to multi-modal data. In this paper, we propose two loss functions based on normalized cross-entropy for learning a joint visual-semantic embedding with batch contrastive training. For a given anchor from one modality, we consider as negatives only the samples from the other modality in the batch, and define our first contrastive loss based on the expected violation incurred over all of these negatives. We then modify this loss to obtain a second contrastive loss based on the violation incurred by the hardest negative alone. We compare our results with existing visual-semantic embedding methods on cross-modal image-to-text and text-to-image retrieval using the MS-COCO and Flickr30K datasets, where we achieve competitive results and are outperformed only by adaptations of the n-pairs symmetric angular loss to multi-modal data. We also share our code and pre-trained models for reproducibility.
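The abstract contrasts an "all negatives" loss with a "hardest negative" loss, both built on normalized cross-entropy over cross-modal pairs. The following is a minimal PyTorch sketch of that distinction, assuming L2-normalized image and text embeddings whose i-th rows form a positive pair; the function name, temperature value, and the exact normalized cross-entropy formulation are illustrative assumptions, not the paper's precise definitions.

```python
import torch
import torch.nn.functional as F

def contrastive_losses(img_emb, txt_emb, temperature=0.07):
    """Sketch of two cross-modal contrastive losses.

    img_emb, txt_emb: (batch, dim) L2-normalized embeddings where row i
    of each tensor forms a matching image-caption pair.
    Returns (loss over all negatives, loss over hardest negatives).
    """
    # Cross-modal similarity matrix: diagonal entries are positives,
    # off-diagonal entries are negatives drawn from the other modality.
    sim = img_emb @ txt_emb.t() / temperature          # (batch, batch)
    labels = torch.arange(sim.size(0), device=sim.device)

    # Loss 1 ("all negatives"): normalized cross-entropy over every
    # cross-modal negative in the batch, in both retrieval directions.
    loss_all = F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)

    # Loss 2 ("hardest negative"): mask out positives, keep only the
    # most violating negative per anchor, and contrast the positive
    # against that single negative.
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(eye, float('-inf'))
    pos = sim.diag()
    hardest_i2t = neg.max(dim=1).values   # hardest caption per image
    hardest_t2i = neg.max(dim=0).values   # hardest image per caption
    target = torch.zeros_like(labels)     # positive sits at index 0
    loss_hard = (
        F.cross_entropy(torch.stack([pos, hardest_i2t], dim=1), target)
        + F.cross_entropy(torch.stack([pos, hardest_t2i], dim=1), target)
    )
    return loss_all, loss_hard
```

The first variant spreads the gradient over every cross-modal negative in the batch, while the second concentrates it on the single most violating negative per anchor, mirroring the sum-versus-max distinction the abstract draws between the two losses.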