arxiv:1910.04867

A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark

Published on Oct 1, 2019

AI-generated summary

The Visual Task Adaptation Benchmark evaluates visual representations by their ability to adapt to diverse, unseen tasks with minimal data, providing insights into the effectiveness and generalizability of different representation learning methods.

Abstract

Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified evaluation for general visual representations hinders progress. Popular protocols are often too constrained (linear classification), limited in diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to representation quality (ELBO, reconstruction error). We present the Visual Task Adaptation Benchmark (VTAB), which defines good representations as those that adapt to diverse, unseen tasks with few examples. With VTAB, we conduct a large-scale study of many popular publicly-available representation learning algorithms. We carefully control confounders such as architecture and tuning budget. We address questions like: How effective are ImageNet representations beyond standard natural datasets? How do representations trained via generative and discriminative models compare? To what extent can self-supervision replace labels? And, how close are we to general visual representations?
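The benchmark protocol sketched in the abstract amounts to a simple loop: adapt the pretrained representation to each downstream task using a small labelled training set, then average the resulting test accuracies. Below is a minimal, illustrative Python sketch of that scoring loop. The task grouping (natural, specialized, structured) and the 1,000-example adaptation budget come from the VTAB paper rather than this page, and fine_tune_and_evaluate is a hypothetical stand-in for whatever adaptation procedure is used (e.g., full fine-tuning of a pretrained network); this is not the authors' reference implementation.

```python
from statistics import mean
from typing import Dict, List

# VTAB groups its downstream tasks into three categories (per the paper):
# natural images, specialized imagery (medical / remote sensing), and
# structured tasks probing geometry and counting. Task names are indicative.
VTAB_TASKS: Dict[str, List[str]] = {
    "natural": ["caltech101", "cifar100", "dtd", "flowers102", "pets", "svhn", "sun397"],
    "specialized": ["patch_camelyon", "eurosat", "resisc45", "diabetic_retinopathy"],
    "structured": ["clevr_count", "clevr_distance", "dmlab", "kitti_distance",
                   "dsprites_position", "dsprites_orientation",
                   "smallnorb_azimuth", "smallnorb_elevation"],
}

EXAMPLES_PER_TASK = 1000  # the few-example ("VTAB-1k") adaptation budget


def fine_tune_and_evaluate(representation, task: str, num_examples: int) -> float:
    """Hypothetical adaptation step: fine-tune `representation` on
    `num_examples` labelled examples from `task` and return test accuracy.
    Replace this stub with an actual training/evaluation routine."""
    raise NotImplementedError


def vtab_score(representation) -> Dict[str, float]:
    """Mean test accuracy per task group, plus the overall score
    (mean over all tasks)."""
    per_task = {
        task: fine_tune_and_evaluate(representation, task, EXAMPLES_PER_TASK)
        for tasks in VTAB_TASKS.values()
        for task in tasks
    }
    scores = {group: mean(per_task[t] for t in tasks)
              for group, tasks in VTAB_TASKS.items()}
    scores["all"] = mean(per_task.values())
    return scores
```

Under this protocol, representations are compared by their mean accuracy across all tasks while holding the architecture and hyperparameter-tuning budget fixed, which is how the study controls the confounders mentioned in the abstract.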

Models citing this paper: 35
Datasets citing this paper: 2
Spaces citing this paper: 530
Collections including this paper: 0