Overview
A robust collection of raw sensor data.
Our autonomous vehicles are equipped with an in-house sensor suite that collects raw sensor data on other cars, pedestrians, traffic lights, and more. This dataset features the raw lidar and camera inputs collected by our autonomous fleet within a bounded geographic area. It includes:
- 1.3M 3D annotations
- 30K lidar point clouds
- 350+ scenes, 60-90 minutes long
Data Collection
Level 5’s in-house sensor suite
Lidar
Our vehicles are equipped with 40- and 64-beam lidars on the roof and bumper. They have an azimuth resolution of 0.2 degrees and jointly produce ~216,000 points per sweep at 10 Hz. The firing directions of all lidars are synchronized.
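As a rough back-of-envelope check of the figures above (treating the quoted numbers as exact, which they are not):

```python
# Back-of-envelope arithmetic from the quoted lidar specs.
AZIMUTH_RES_DEG = 0.2       # azimuth resolution per firing column
POINTS_PER_SWEEP = 216_000  # approximate combined points per sweep
SWEEP_RATE_HZ = 10          # sweep rate

# Firing columns per full 360-degree rotation.
columns_per_rotation = 360 / AZIMUTH_RES_DEG

# Combined point throughput across all lidars.
points_per_second = POINTS_PER_SWEEP * SWEEP_RATE_HZ

print(columns_per_rotation)  # 1800.0
print(points_per_second)     # 2160000
```

So each sweep contains ~1,800 azimuth columns, and the combined lidars emit on the order of 2.16M points per second.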
Camera
Our vehicles are also equipped with six cameras, built in-house, that together cover a 360° field of view, plus one long-focal camera that points upward. The cameras are synchronized with the lidar so that the lidar beam is at the center of each camera's field of view when an image is captured.
Data Format
We use the familiar nuScenes format for our dataset to ensure compatibility with previous work. We’ve also customized the nuScenes devkit and included instructions on how to use it.
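In the nuScenes format, the dataset is stored as JSON tables whose records are linked by tokens: each `sample` (a keyframe) is referenced by its `sample_data` records (sensor files) and `sample_annotation` records (3D boxes). A minimal sketch of that layout, with made-up tokens and values purely for illustration:

```python
# Minimal sketch of the nuScenes-style record layout (illustrative values only;
# field names follow the nuScenes schema: sample, sample_data, sample_annotation).

sample = {"token": "sample_001", "timestamp": 1557858039302414,
          "scene_token": "scene_001"}

sample_data = [
    {"token": "sd_lidar_001", "sample_token": "sample_001",
     "channel": "LIDAR_TOP", "filename": "lidar/sd_lidar_001.bin"},
    {"token": "sd_cam_001", "sample_token": "sample_001",
     "channel": "CAM_FRONT", "filename": "images/sd_cam_001.jpeg"},
]

sample_annotation = [
    {"token": "ann_001", "sample_token": "sample_001",
     "category_name": "car", "translation": [10.0, 2.0, 0.5],
     "size": [1.8, 4.5, 1.6], "rotation": [1.0, 0.0, 0.0, 0.0]},
]

def annotations_for(sample_token, annotations):
    """Join annotations to a sample via its token, as the devkit does internally."""
    return [a for a in annotations if a["sample_token"] == sample_token]

boxes = annotations_for(sample["token"], sample_annotation)
print(len(boxes), boxes[0]["category_name"])
```

The customized devkit resolves these token links for you; the sketch only shows why the format is easy to join and query with previous nuScenes-based tooling.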
Citation
Please use the following citation when referencing the dataset:
@misc{lyft2019,
  title = {Lyft Level 5 Perception Dataset 2020},
  author = {Kesten, R. and Usman, M. and Houston, J. and Pandya, T. and Nadhamuni, K. and Ferreira, A. and Yuan, M. and Low, B. and Jain, A. and Ondruska, P. and Omari, S. and Shah, S. and Kulkarni, A. and Kazakova, A. and Tao, C. and Platinsky, L. and Jiang, W. and Shet, V.},
  year = {2019},
  howpublished = {\url{https://level5.lyft.com/dataset/}}
}