L2C: Describing Visual Differences Needs Semantic Understanding of Individuals


Recent advances in language and vision have pushed research forward from captioning a single image to describing the visual differences between image pairs. Given two images, I_1 and I_2, the task is to generate a description W_{1,2} comparing them. Existing methods directly model the { I_1, I_2 } -> W_{1,2} mapping without semantic understanding of the individual images. In this paper, we introduce a Learning-to-Compare (L2C) model, which learns to understand the semantic structures of the two images and compare them while learning to describe each one. We demonstrate that L2C benefits from comparing explicit semantic representations and single-image captions, and that it generalizes better to new test image pairs. It outperforms the baseline on both automatic and human evaluation on the Birds-to-Words dataset.
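The joint formulation described above, learning to describe each image while comparing the pair, can be sketched as a multi-task objective. The function and parameter names below (`diff_caption_loss`, `lambda_cap`, etc.) are illustrative assumptions, not the authors' actual implementation:

```python
def joint_loss(diff_caption_loss: float,
               caption_loss_1: float,
               caption_loss_2: float,
               lambda_cap: float = 0.5) -> float:
    """Hypothetical L2C-style training objective.

    Combines the pairwise difference-captioning loss with weighted
    single-image captioning losses for I_1 and I_2, so the model is
    trained to understand each image individually while comparing them.
    """
    return diff_caption_loss + lambda_cap * (caption_loss_1 + caption_loss_2)


# Example: a batch where the comparison loss is 1.2 and each
# single-image captioning loss is 0.8 (values are made up).
total = joint_loss(1.2, 0.8, 0.8)
```

In this sketch, `lambda_cap` trades off how strongly single-image captioning regularizes the comparison task; the paper's exact weighting scheme may differ.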



ICB Affiliated Authors

An Yan, Xin Eric Wang, Tsu-Jui Fu, William Yang Wang
Peer-Reviewed Conference Presentation
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics