2D Box
License: Unknown


Objects365 is a new large-scale dataset designed to spur object detection research, with a focus on diverse objects in the wild.

  • 365 categories
  • 2 million images
  • 30 million bounding boxes

Data collection

Data Source

To make the image sources more diverse, we collect images mainly from Flickr.

Object Categories

Based on the collected images, we first select eleven super-categories that are common and diverse enough to cover most object instances. They are: human and related accessories, living room, clothes, kitchen, instrument, transportation, bathroom, electronics, food (vegetables), office supplies, and animal. Based on these super-categories, we further propose 442 categories which widely exist in our daily lives. As some of the object categories are rarely found, we first annotate all 442 categories in the first 100K images and then select the 365 most frequent object categories as our target objects. Also, to be compatible with existing object detection benchmarks, the 365 categories include the categories defined in the PASCAL VOC and COCO benchmarks.
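The frequency-based selection described above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the annotation format (a list of `(image_id, category)` pairs from the 100K-image pilot pass) is an assumption.

```python
from collections import Counter

def select_target_categories(annotations, k=365):
    """Keep the k most frequent categories from pilot annotations.

    `annotations` is a hypothetical list of (image_id, category) pairs
    produced by labeling all 442 candidate categories on the pilot set.
    """
    counts = Counter(category for _, category in annotations)
    return [category for category, _ in counts.most_common(k)]

# Toy example: three categories, keeping the top two.
pilot = [(1, "person"), (1, "chair"), (2, "person"), (2, "chair"), (3, "oboe")]
print(select_target_categories(pilot, k=2))  # → ['person', 'chair']
```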

Non-Iconic Images

As our Objects365 dataset focuses on object detection, we eliminate images that are only suitable for image classification, for example, an image that contains only one object instance near the image center. This filtering process was first adopted in COCO.
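One way to express the "iconic image" criterion is as a simple heuristic over the candidate boxes. The thresholds and box format below are assumptions for illustration; the paper does not specify the exact rule.

```python
def is_iconic(boxes, image_w, image_h, center_tol=0.25):
    """Heuristic: an image is 'iconic' if it contains exactly one box
    whose center lies near the image center.

    `boxes` is a list of (x, y, w, h) tuples in pixels; `center_tol`
    (fraction of each image dimension) is an assumed threshold.
    """
    if len(boxes) != 1:
        return False
    x, y, w, h = boxes[0]
    cx, cy = x + w / 2, y + h / 2
    return (abs(cx - image_w / 2) < center_tol * image_w
            and abs(cy - image_h / 2) < center_tol * image_h)

# A single centered object would be flagged (and filtered out):
print(is_iconic([(400, 300, 200, 150)], 1000, 750))  # → True
```

Images failing this check (multiple instances, or an off-center object) are the non-iconic ones the dataset keeps.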

Data Annotation

We design our annotation pipeline as the following three steps:

  • Step 1: a two-class classification. If the image is non-iconic or contains at least one object instance from the eleven super-categories, it is passed to the next step.
  • Step 2: the image is labeled with image-level tags drawn from the eleven super-categories. An image may receive more than one tag.
  • Step 3: one annotator is assigned to label the object instances of one specific super-category. Every instance belonging to that super-category is labeled with a bounding box together with an object name.
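The three steps above can be sketched as a small driver function. The three callables stand in for the human annotation stages and are hypothetical placeholders, not part of any released tooling.

```python
SUPER_CATEGORIES = [
    "human and related accessories", "living room", "clothes", "kitchen",
    "instrument", "transportation", "bathroom", "electronics",
    "food", "office supplies", "animal",
]

def annotate(image, is_candidate, tag_supercategories, label_instances):
    """Sketch of the three-stage pipeline (callables are hypothetical)."""
    # Step 1: binary check -- keep the image only if it is non-iconic
    # and contains at least one instance of the eleven super-categories.
    if not is_candidate(image):
        return None
    # Step 2: image-level tags; an image may receive several.
    tags = tag_supercategories(image)
    # Step 3: one annotator per super-category labels every instance of
    # that super-category with a bounding box and an object name.
    return {tag: label_instances(image, tag) for tag in tags}

# Toy usage with stub annotators:
result = annotate(
    "img.jpg",
    is_candidate=lambda im: True,
    tag_supercategories=lambda im: ["animal"],
    label_instances=lambda im, tag: [{"name": "dog", "box": (10, 20, 80, 60)}],
)
print(result)  # → {'animal': [{'name': 'dog', 'box': (10, 20, 80, 60)}]}
```

Assigning each annotator a single super-category in Step 3 keeps the labeling task narrow, which is the rationale the pipeline description implies.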


Please use the following citation when referencing the dataset:

  @inproceedings{shao2019objects365,
    title={Objects365: A large-scale, high-quality dataset for object detection},
    author={Shao, Shuai and Li, Zeming and Zhang, Tianyuan and Peng, Chao and Yu, Gang and Zhang, Xiangyu and Li, Jing and Sun, Jian},
    booktitle={Proceedings of the IEEE International Conference on Computer Vision},
    year={2019}
  }
Provided by
Megvii Technology Ltd.
Megvii (Kuangshi) is a world-leading artificial intelligence products and solutions company. We currently focus on areas where algorithms can create great value: the Internet of people, the Internet of things in cities, and the Internet of things in the supply chain.