MVSEC
Action/Event Detection
License: Unknown

Overview

The Multi Vehicle Stereo Event Camera (MVSEC) dataset is a collection of data designed for the
development of novel 3D perception algorithms for event-based cameras. Stereo event data is
collected from car, motorbike, hexacopter, and handheld platforms, and fused with lidar, IMU,
motion capture, and GPS measurements to provide ground truth pose and depth images. In addition,
we provide images from a standard stereo frame-based camera pair for comparison with traditional
techniques.

Event-based cameras are a new asynchronous sensing modality that measures changes in image
intensity. When the log intensity at a pixel changes by more than a set threshold, the camera
immediately returns the pixel location of the change, along with a microsecond-accurate timestamp
and the direction of the change (up or down). This allows sensing with extremely low latency.
In addition, the cameras have extremely high dynamic range and low power usage.
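The event generation model described above can be sketched in a few lines. This is an illustrative simulation, not code from the dataset: a single pixel emits an event each time its log intensity moves more than a contrast threshold away from the level at which it last fired (the threshold value here is an arbitrary choice).

```python
import math

def generate_events(samples, threshold=0.2):
    """Simulate one event-camera pixel.

    `samples` is a list of (timestamp_us, intensity) pairs. An event
    (timestamp_us, polarity) is emitted every time the log intensity moves
    more than `threshold` from the reference level of the last event;
    polarity is +1 for an increase, -1 for a decrease.
    """
    events = []
    ref = math.log(samples[0][1])  # reference log intensity
    for t_us, intensity in samples[1:]:
        level = math.log(intensity)
        # A large jump can cross the threshold several times, producing a
        # burst of events with the same timestamp.
        while abs(level - ref) > threshold:
            polarity = 1 if level > ref else -1
            ref += polarity * threshold  # step the reference toward the new level
            events.append((t_us, polarity))
    return events

# A brightening ramp produces a burst of positive events:
ramp = [(0, 100.0), (1000, 110.0), (2000, 150.0), (3000, 150.0)]
print(generate_events(ramp))  # → [(2000, 1), (2000, 1)]
```

Note that the small change from 100 to 110 stays below the threshold and produces no event, which is why event streams are sparse when the scene is static.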

Data Format

ROS Bag Data Format

Each sequence consists of a data ROS bag, with the following topics:

  • /davis/left/events (dvs_msgs/EventArray) - Events from the left DAVIS camera.
  • /davis/left/image_raw (sensor_msgs/Image) - Grayscale images from the left DAVIS camera.
  • /davis/left/imu (sensor_msgs/Imu) - IMU readings from the left DAVIS camera.
  • /davis/right/events (dvs_msgs/EventArray) - Events from the right DAVIS camera.
  • /davis/right/image_raw (sensor_msgs/Image) - Grayscale images from the right DAVIS camera.
  • /davis/right/imu (sensor_msgs/Imu) - IMU readings from the right DAVIS camera.
  • /velodyne_point_cloud (sensor_msgs/PointCloud2) - Point clouds from the Velodyne (one
    per spin).
  • /visensor/left/image_raw (sensor_msgs/Image) - Grayscale images from the
    left VI-Sensor camera.
  • /visensor/right/image_raw (sensor_msgs/Image) - Grayscale images
    from the right VI-Sensor camera.
  • /visensor/imu (sensor_msgs/Imu) - IMU readings from the VI-Sensor.
  • /visensor/cust_imu (visensor_node/visensor_imu) - Full
    IMU readings from the VI-Sensor (including magnetometer, temperature and pressure).

Two sets of custom messages are used: dvs_msgs/EventArray from rpg_dvs_ros and
visensor_node/visensor_imu from visensor_node. The visensor_node package is optional if you do
not need the extra IMU outputs (magnetometer, temperature, and pressure).
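Reading the event topics amounts to iterating the bag and flattening each EventArray message. The sketch below models the message stream with plain tuples so the logic is self-contained; with ROS installed, the iterator would instead come from `rosbag.Bag(...).read_messages(topics=[...])`, and each event would carry the dvs_msgs fields `x`, `y`, `ts`, and `polarity`. The bag filename in the comment is a placeholder.

```python
# With ROS installed, the message stream would come from something like:
#   import rosbag
#   stream = ((topic, msg.events) for topic, msg, t in
#             rosbag.Bag('some_sequence_data.bag')
#                   .read_messages(topics=['/davis/left/events']))
# Here we stand in plain (topic, events) tuples for the same structure.

def collect_events(messages, topic='/davis/left/events'):
    """Flatten EventArray-style messages on `topic` into one event list."""
    out = []
    for msg_topic, events in messages:
        if msg_topic == topic:
            out.extend(events)
    return out

# Toy stream: two EventArray messages interleaved with an IMU message.
# Each event is abbreviated to (x, y, polarity).
stream = [
    ('/davis/left/events', [(10, 20, 1), (11, 20, -1)]),
    ('/davis/left/imu',    []),
    ('/davis/left/events', [(12, 21, 1)]),
]
print(collect_events(stream))  # → [(10, 20, 1), (11, 20, -1), (12, 21, 1)]
```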

In addition, each corresponding ground truth bag contains the following topics:

  • /davis/left/depth_image_raw (sensor_msgs/Image) - Depth maps for the left DAVIS camera
    at a given timestamp (note: these images are saved using the CV_32FC1 format, i.e. floats).
  • /davis/left/depth_image_rect (sensor_msgs/Image) - Rectified depth maps for the left DAVIS
    camera at a given timestamp (note: these images are saved using the CV_32FC1 format,
    i.e. floats).
  • /davis/left/blended_image_rect (sensor_msgs/Image) - Visualization of all events from the
    left DAVIS that are within 25 ms of each left depth map, superimposed on the depth map.
    This message gives a preview of what each sequence looks like.
  • /davis/left/odometry
    (geometry_msgs/PoseStamped) - Pose output using LOAM. These poses are locally consistent, but
    may experience drift over time. Used to stitch point clouds together to generate depth maps.
  • /davis/left/pose (geometry_msgs/PoseStamped) - Pose output using Google Cartographer.
    These poses are globally loop closed, and can be assumed to have minimal drift. Note that these
    poses were optimized using Cartographer’s 2D mapping, which does not optimize over the height
    (Z axis). Pitch and roll are still optimized, however.
  • /davis/right/depth_image_raw
    (sensor_msgs/Image) - Depth maps for the right DAVIS camera at a given timestamp.
  • /davis/right/depth_image_rect
    (sensor_msgs/Image) - Rectified depth maps for the right DAVIS camera at a given timestamp.
  • /davis/right/blended_image_rect (sensor_msgs/Image) - Visualization of all events from
    the right DAVIS that are within 25 ms of each right depth map, superimposed on the depth
    map. This message gives a preview of what each sequence looks like.
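Since the depth topics are stored as CV_32FC1, each pixel of the `sensor_msgs/Image` payload is one 32-bit float, so decoding `msg.data` is just unpacking `width * height` floats (little-endian when `msg.is_bigendian` is 0). A minimal stdlib sketch, assuming pixels without a depth value are stored as NaN (worth verifying against the actual bags):

```python
import math
import struct

def decode_depth(data, width, height, big_endian=False):
    """Unpack a CV_32FC1 sensor_msgs/Image payload into rows of floats.

    `data` is the raw `msg.data` byte string. Pixels with no depth value
    are assumed to be NaN here; check this against the actual bags.
    """
    fmt = ('>' if big_endian else '<') + 'f' * (width * height)
    flat = struct.unpack(fmt, data)
    return [list(flat[r * width:(r + 1) * width]) for r in range(height)]

# A 2x2 example image: three depths in metres and one missing pixel (NaN).
payload = struct.pack('<4f', 1.5, 2.0, float('nan'), 7.25)
depth = decode_depth(payload, width=2, height=2)
print(depth[0])                 # → [1.5, 2.0]
print(math.isnan(depth[1][0]))  # → True
```

In practice one would use cv_bridge or numpy's `frombuffer` for the same conversion; the point is only that the pixel format is raw float32, not an 8-bit image.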

Citation

Please use the following citation when referencing the dataset:

@article{DBLP:journals/corr/abs-1801-10202,
  author    = {Alex Zihao Zhu and
               Dinesh Thakur and
               Tolga {\"{O}}zaslan and
               Bernd Pfrommer and
               Vijay Kumar and
               Kostas Daniilidis},
  title     = {The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset
               for 3D Perception},
  journal   = {CoRR},
  volume    = {abs/1801.10202},
  year      = {2018},
  url       = {http://arxiv.org/abs/1801.10202},
  archivePrefix = {arXiv},
  eprint    = {1801.10202},
  timestamp = {Mon, 13 Aug 2018 16:47:55 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1801-10202.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

For the ground truth optical flow, please cite:

@article{DBLP:journals/corr/abs-1802-06898,
  author    = {Alex Zihao Zhu and
               Liangzhe Yuan and
               Kenneth Chaney and
               Kostas Daniilidis},
  title     = {EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based
               Cameras},
  journal   = {CoRR},
  volume    = {abs/1802.06898},
  year      = {2018},
  url       = {http://arxiv.org/abs/1802.06898},
  archivePrefix = {arXiv},
  eprint    = {1802.06898},
  timestamp = {Mon, 13 Aug 2018 16:47:54 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1802-06898.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
Data Summary
Type
Point Cloud, Image
Amount
--
Size
--
Provided by
Alex Zhu
Alex Zhu holds a PhD from the University of Pennsylvania, where he worked on algorithms for event-based cameras. He is now a researcher at Waymo.