KITTI-object
2D Box | 3D Box | Autonomous Driving
License: CC BY-NC-SA 3.0

Overview

KITTI-object consists of 7481 training point clouds (and images) and 7518 testing point clouds
(and images).

  • The 2D object detection and orientation estimation benchmark uses 2D bounding box overlap
    to compute precision-recall curves for detection, and computes orientation similarity to
    evaluate the orientation estimates in bird's eye view.
  • The 3D object detection benchmark uses 3D bounding box overlap to compute precision-recall
    curves.
  • The bird's eye view benchmark uses bounding box overlap in bird's eye view to compute
    precision-recall curves (see the sketch after this list).
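
All three benchmarks rank detections by confidence score and sweep out a precision-recall curve,
counting a detection as a true positive when its overlap (intersection over union) with a
still-unmatched ground-truth box exceeds a class-dependent threshold. Below is a minimal sketch
of the 2D case only; the helper names (iou_2d, precision_recall), the (x1, y1, x2, y2) box
format, and the 0.7 threshold are illustrative assumptions, not the official evaluation code
that ships with the KITTI development kit.

```python
def iou_2d(box_a, box_b):
    """Intersection over union of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def precision_recall(detections, ground_truths, iou_threshold=0.7):
    """Greedy matching of score-sorted detections against ground-truth boxes.

    detections: list of (score, box); ground_truths: list of boxes.
    Returns one (precision, recall) pair per detection, in descending score order.
    """
    matched = [False] * len(ground_truths)
    tp = fp = 0
    curve = []
    for score, box in sorted(detections, key=lambda d: d[0], reverse=True):
        # Find the best still-unmatched ground-truth box for this detection.
        best_iou, best_idx = 0.0, -1
        for i, gt in enumerate(ground_truths):
            if not matched[i]:
                overlap = iou_2d(box, gt)
                if overlap > best_iou:
                    best_iou, best_idx = overlap, i
        if best_iou >= iou_threshold:
            matched[best_idx] = True
            tp += 1
        else:
            fp += 1
        curve.append((tp / (tp + fp), tp / len(ground_truths)))
    return curve
```

The 3D and bird's eye view benchmarks follow the same recipe but replace the 2D overlap with
volumetric and bird's eye view overlap of the (rotated) 3D boxes, and the orientation benchmark
additionally weights each true positive by its orientation similarity.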

Data Collection

Our recording platform is a Volkswagen Passat B6, which has been modified with actuators for
the pedals (acceleration and brake) and the steering wheel. The data is recorded using an
eight-core i7 computer equipped with a RAID system, running Ubuntu Linux and a real-time
database. The sensor suite comprises a Velodyne HDL-64E rotating laser scanner, two grayscale
and two color cameras, and an OXTS GPS/IMU inertial navigation system.

The laser scanner spins at 10 frames per second, capturing approximately 100k points per cycle.
It has a vertical resolution of 64 laser beams. The cameras are mounted approximately level
with the ground plane. The camera images are cropped to a size of 1382 x 512 pixels using
libdc1394's Format 7 mode; after rectification, the images become slightly smaller. The cameras
are triggered at 10 frames per second by the laser scanner (when facing forward), with the
shutter time adjusted dynamically (maximum shutter time: 2 ms). Our sensor setup with respect
to the vehicle is illustrated in the following figure. More information on the calibration
parameters is given in the calibration files and the development kit (see the raw data section).

[Figure: sensor setup with respect to the vehicle]
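
For reference, each object point cloud is stored as a flat binary file of float32 quadruples
(x, y, z, reflectance), and each per-frame calibration file lists the rectified camera
projection matrices together with the Velodyne-to-camera transform. Below is a minimal loading
sketch, assuming the standard object development kit layout; the file paths are placeholders,
and the key names (P2, R0_rect, Tr_velo_to_cam) follow the calibration files distributed with
the object development kit.

```python
import numpy as np


def load_velodyne_scan(bin_path):
    """Read a KITTI Velodyne scan as an N x 4 float32 array of (x, y, z, reflectance)."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)


def load_calibration(calib_path):
    """Parse a KITTI object calibration file into a dict of flat numpy arrays."""
    calib = {}
    with open(calib_path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, values = line.split(":", 1)
            calib[key.strip()] = np.array([float(v) for v in values.split()])
    return calib


# Placeholder paths for illustration only.
points = load_velodyne_scan("training/velodyne/000000.bin")
calib = load_calibration("training/calib/000000.txt")
P2 = calib["P2"].reshape(3, 4)  # projection matrix of the left color camera after rectification
print(points.shape, P2.shape)   # roughly (100000+, 4) and (3, 4)
```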

Citation

Please use the following citation when referencing the dataset:

@INPROCEEDINGS{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}

License

CC BY-NC-SA 3.0

Data Summary

Type: Point Cloud, Image
Amount: --
Size: 118.18 GB
Provided by: Max Planck Institute for Intelligent Systems ("Our goal is to understand the
principles of Perception, Action and Learning in autonomous systems that successfully interact
with complex environments, and to use this understanding to design future systems.")