arXiv:1608.00272

Modeling Context in Referring Expressions

Published on Jul 31, 2016
Abstract

Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets (RefCOCO, RefCOCO+, and RefCOCOg) shows the advantages of our methods for both referring expression generation and comprehension.

Summary

Incorporating visual comparisons and joint language generation for objects within images improves performance in referring expression tasks.
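The abstract's key idea, comparing the target object against other objects of the same category in the image, can be sketched as below. The function names, the normalized-difference pooling, and the 4-d relative-location encoding are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def visual_comparison_features(target_feat, context_feats):
    """Pool appearance differences between the target object's CNN feature
    and the features of other same-category objects in the image.
    (Illustrative: the pooling scheme here is an assumption.)"""
    if len(context_feats) == 0:
        return np.zeros_like(target_feat)
    diffs = [
        (target_feat - c) / (np.linalg.norm(target_feat - c) + 1e-8)
        for c in context_feats
    ]
    return np.mean(diffs, axis=0)

def location_difference_features(target_box, context_boxes):
    """Encode the target's position/size relative to each same-category
    object as [dx, dy, w_ratio, h_ratio], normalized by the target's
    width and height. Boxes are (x, y, w, h)."""
    x, y, w, h = target_box
    return np.array([
        [(cx - x) / w, (cy - y) / h, cw / w, ch / h]
        for cx, cy, cw, ch in context_boxes
    ])
```

In a full model these comparison features would be concatenated with the target's own appearance and location features before being fed to the expression generator or comprehension scorer.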

Models citing this paper: 165

Datasets citing this paper: 1

Spaces citing this paper: 73
