Modeling Context in Referring Expressions
Abstract
Incorporating visual comparisons and joint language generation for objects within images improves performance in referring expression tasks.
Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets (RefCOCO, RefCOCO+, and RefCOCOg) shows the advantages of our methods for both referring expression generation and comprehension.
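To make the idea of visual comparison concrete, below is a minimal sketch of context features of the kind described: a normalized location/size feature for each box, and an averaged, normalized difference between a target object's appearance feature and those of other same-category objects in the image. The function names, the epsilon constant, and the use of raw NumPy arrays for CNN features are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def location_feature(box, img_w, img_h):
    # 5-d location/size feature for a box (x1, y1, x2, y2),
    # normalized by image width/height; last entry is relative area.
    x1, y1, x2, y2 = box
    return np.array([
        x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h,
        ((x2 - x1) * (y2 - y1)) / (img_w * img_h),
    ])

def visual_difference(target_feat, other_feats):
    # Average of normalized differences between the target object's
    # appearance feature and each other same-category object's feature.
    # Returns zeros when the object has no same-category neighbors.
    if len(other_feats) == 0:
        return np.zeros_like(target_feat)
    diffs = [
        (target_feat - f) / (np.linalg.norm(target_feat - f) + 1e-8)
        for f in other_feats
    ]
    return np.mean(diffs, axis=0)
```

In a full model, features like these would be concatenated with the target object's own appearance feature and fed to the language generation and comprehension modules, so that the expression can mention properties that distinguish the object from its same-category neighbors (e.g., "the man on the left").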