CLEVR
License: CC BY 4.0

Overview

When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
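The per-question annotations mentioned above take the form of functional programs describing the chain of reasoning needed to answer each question. A minimal sketch of how such an annotation can be inspected, assuming the field names (`question`, `answer`, `program`, `image_filename`) used in the public CLEVR question files; the sample record below is illustrative, not taken from the actual dataset:

```python
# Illustrative record mirroring the schema of CLEVR's questions JSON
# (the released files contain a "questions" list of such records).
sample = {
    "image_filename": "CLEVR_train_000000.png",
    "question": "How many large metal spheres are there?",
    "answer": "2",  # answer value is illustrative
    "program": [
        {"function": "scene", "inputs": [], "value_inputs": []},
        {"function": "filter_size", "inputs": [0], "value_inputs": ["large"]},
        {"function": "filter_material", "inputs": [1], "value_inputs": ["metal"]},
        {"function": "filter_shape", "inputs": [2], "value_inputs": ["sphere"]},
        {"function": "count", "inputs": [3], "value_inputs": []},
    ],
}

def reasoning_steps(record):
    """Return the sequence of reasoning functions annotated for a question."""
    return [step["function"] for step in record["program"]]

print(reasoning_steps(sample))
# → ['scene', 'filter_size', 'filter_material', 'filter_shape', 'count']
```

Grouping questions by these function sequences is what lets the dataset attribute a model's errors to specific reasoning skills (counting, attribute filtering, comparison, and so on) rather than conflating them.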

Citation

@inproceedings{johnson2017clevr,
  title={{CLEVR}: A diagnostic dataset for compositional language and elementary visual reasoning},
  author={Johnson, Justin and Hariharan, Bharath and van der Maaten, Laurens and Fei-Fei, Li and Zitnick, C. Lawrence and Girshick, Ross},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2901--2910},
  year={2017}
}

License

CC BY 4.0

Data Summary
Type: Image, Text
Amount: --
Size: --
Provided by
Justin Johnson
I am an Assistant Professor at the University of Michigan and a Visiting Scientist at Facebook AI Research. I'm broadly interested in computer vision and machine learning. My research involves visual reasoning, vision and language, image generation, and 3D reasoning using deep neural networks. I received my PhD from Stanford University, advised by Fei-Fei Li.