OpenLORIS-2019
Classification
Robot
License: BSD-4-Clause

Overview

We provide a new lifelong robotic vision dataset ("OpenLORIS-Object") collected via RGB-D cameras
mounted on mobile robots. The dataset embeds the challenges a robot faces in real-life
applications and provides new benchmarks for validating lifelong object recognition algorithms.

Data Collection

Data was collected in office and home environments. Several ground robots, each equipped with an
Intel RealSense D435i camera and a T265 camera, were used for data collection. The D435i provides
aligned color and depth images along with IMU measurements; the T265 provides stereo fisheye
images and aligned IMU measurements. The first released version of the dataset includes the RGB
and depth images.
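
D435i depth frames are typically stored as 16-bit integers. As a minimal sketch (the scale constant and the array below are illustrative, not taken from the dataset; the RealSense depth unit defaults to 1 mm but is configurable), converting a raw frame to meters looks like:

```python
import numpy as np

# Assumed default RealSense depth unit: 1 mm per raw unit (0.001 m).
DEPTH_SCALE = 0.001  # meters per raw depth unit (assumption, not from the dataset docs)

# Synthetic 2x2 "depth frame"; real frames are 16-bit single-channel images.
raw_depth = np.array([[0, 500],
                      [1000, 2000]], dtype=np.uint16)

# Convert to floating-point meters; zeros mean "no depth measured".
depth_m = raw_depth.astype(np.float32) * DEPTH_SCALE
print(depth_m[1, 0])  # 1.0 (meters)
```

A value of 0 in the raw frame conventionally indicates an invalid (unmeasured) pixel rather than zero distance.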

The robot actively records videos of the target objects under multiple illuminations,
occlusions, camera-object distances/angles, and context information (clutter), covering the
common challenges a robot typically faces. For example:

  • Illumination. In real-world applications, illumination can vary significantly over time,
    e.g., between day and night. We repeat the data collection under weak, normal, and strong
    lighting conditions. The task becomes particularly challenging when the light is very weak.
  • Occlusion. Occlusion happens when part of an object is hidden by other objects, or when
    only a portion of the object is visible in the field of view. Since distinctive
    characteristics of the object may be hidden, occlusion significantly increases the
    difficulty of recognition.
  • Object size. Small or elongated objects, such as dry batteries or glue sticks, make the
    task challenging.
  • Camera-object angles/distances. The camera angle affects which attributes of the object
    can be detected.
  • Clutter. Clutter refers to the presence of other objects in the vicinity of the considered
    object. The simultaneous presence of multiple objects may interfere with the classification
    task.

Data Format

Our released dataset is a collection of 69 instances spanning 19 categories of daily
necessities, recorded in 7 scenes. For each instance, a 17-second video (at 30 fps) was
recorded with a RealSense D435i camera, yielding ~500 RGB-D frames. Four environmental
factors, each with three levels, are explicitly considered in the dataset: illumination,
occlusion, clutter, and the actual pixel size of the object in the images. The three
difficulty levels defined for each factor are shown below (giving 12 subcategories in total
with respect to the environmental factors):

Level | Illumination | Occlusion (percentage) | Object Pixel Size (pixels) | Clutter
1     | Strong       | 0%                     | >200×200                   | Simple
2     | Normal       | 25%                    | 30×30−200×200              | Normal
3     | Weak         | 50%                    | <30×30                     | Complex
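
The level definitions above can be encoded as a small lookup table; this is an illustrative sketch (the structure and names are our own, not part of the dataset's official tooling):

```python
# Difficulty levels per factor, transcribed from the table above.
# Keys and field names are illustrative, not an official dataset API.
DIFFICULTY_LEVELS = {
    1: {"illumination": "strong", "occlusion_pct": 0,  "pixel_size": ">200x200",      "clutter": "simple"},
    2: {"illumination": "normal", "occlusion_pct": 25, "pixel_size": "30x30-200x200", "clutter": "normal"},
    3: {"illumination": "weak",   "occlusion_pct": 50, "pixel_size": "<30x30",        "clutter": "complex"},
}

def occlusion_for_level(level: int) -> int:
    """Return the nominal occlusion percentage for a difficulty level (1-3)."""
    return DIFFICULTY_LEVELS[level]["occlusion_pct"]

print(occlusion_for_level(3))  # 50
```

With four factors and three levels each, iterating over `DIFFICULTY_LEVELS` for each factor yields the 12 subcategories mentioned in the text.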

For each instance in each subcategory, we provide 260 samples; across the 4 factors × 3 levels,
each instance therefore has 3,120 samples. The total number of images is approximately:

260 (samples/instance/subcategory) × 69 (instances) × 4 (factors) × 3 (levels) = 215,280
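
The count above can be verified directly:

```python
# Sanity check of the image-count arithmetic stated above.
samples_per_subcategory = 260  # samples per instance per (factor, level) subcategory
instances = 69
factors = 4
levels = 3

per_instance = samples_per_subcategory * factors * levels  # samples per instance
total_images = per_instance * instances

print(per_instance)   # 3120
print(total_images)   # 215280
```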

Citation

The data provided here is the first version for evaluating lifelong object recognition
algorithms. Please cite the papers below in any academic work that uses this dataset.

Qi She et al., "OpenLORIS-Object: A Dataset and Benchmark towards Lifelong Object Recognition," arXiv:1911.06487, 2019.
Qi She et al., "IROS 2019 Lifelong Robotic Vision: Object Recognition Challenge [Competitions]," IEEE Robotics & Automation Magazine, vol. 27, no. 2, pp. 11-16, June 2020, doi: 10.1109/MRA.2020.2987186.


@inproceedings{she2019openlorisobject,
    title={ {OpenLORIS-Object}: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning},
    author={Qi She and Fan Feng and Xinyue Hao and Qihan Yang and Chuanlin Lan and Vincenzo Lomonaco and Xuesong Shi and Zhengwei Wang and Yao Guo and Yimin Zhang and Fei Qiao and Rosa H. M. Chan},
    booktitle={2020 International Conference on Robotics and Automation (ICRA)},
    year={2020},
    pages={4767-4773},
}


@article{9113359,
    title={IROS 2019 Lifelong Robotic Vision: Object Recognition Challenge [Competitions]},
    author={H. {Bae} and E. {Brophy} and R. H. M. {Chan} and B. {Chen} and F. {Feng} and G. {Graffieti} and V. {Goel} and X. {Hao} and H. {Han} and S. {Kanagarajah} and S. {Kumar} and S. {Lam} and T. L. {Lam} and C. {Lan} and Q. {Liu} and V. {Lomonaco} and L. {Ma} and D. {Maltoni} and G. I. {Parisi} and L. {Pellegrini} and D. {Piyasena} and S. {Pu} and Q. {She} and D. {Sheet} and S. {Song} and Y. {Son} and Z. {Wang} and T. E. {Ward} and J. {Wu} and M. {Wu} and D. {Xie} and Y. {Xu} and L. {Yang} and Q. {Yang} and Q. {Zhong} and L. {Zhou}},
    journal={IEEE Robotics and Automation Magazine},
    year={2020},
    volume={27},
    number={2},
    pages={11-16},
    doi={10.1109/MRA.2020.2987186},
}

License

BSD-4-Clause

Data Summary

Type: Image
Amount: 215.28K
Size: 14.1GB
Provided by: Qi She et al.