LISA Traffic Light
2D Box
Autonomous Driving
License: CC BY-NC-SA 4.0

Overview

The database was collected in San Diego, California, USA. It provides four daytime
and two nighttime sequences, primarily used for testing, covering 23 minutes and 25 seconds
of driving in Pacific Beach and La Jolla, San Diego. The stereo image pairs were acquired with
a Point Grey Bumblebee XB3 (BBX3-13S2C-60), which contains three lenses, each capturing images
at a resolution of 1280 x 960 with a field of view (FoV) of 66°. The left camera
view is used for all test sequences and training clips. The training clips consist of
13 daytime clips and 5 nighttime clips.

Data Annotation

The annotation.zip contains two types of annotation for each sequence and clip.
The first annotation type marks the entire traffic light (TL) area together with the state
the TL is in. This annotation file is called frameAnnotationsBOX, and it is generated from
the second annotation file by enlarging all annotations larger than 4x4 pixels. The second
annotation type marks only the area of the traffic light that is lit, together with its state.
This second annotation file is called frameAnnotationsBULB.
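
To make the relationship between the two files concrete, here is a minimal Python sketch
that tallies annotations per state in a BOX file and its BULB counterpart. The file paths
are hypothetical placeholders, and the semicolon delimiter and "Annotation tag" column
follow the CSV layout described in the next paragraph. Since frameAnnotationsBOX is derived
from frameAnnotationsBULB, the per-state tallies should agree; only the box extents differ.

import csv
from collections import Counter

# Hypothetical paths -- adjust to wherever annotation.zip was extracted.
BOX_CSV = "Annotations/dayTrain/dayClip1/frameAnnotationsBOX.csv"
BULB_CSV = "Annotations/dayTrain/dayClip1/frameAnnotationsBULB.csv"

def state_counts(path):
    """Count annotations per state (class tag) in one annotation file."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter=";")
        return Counter(row["Annotation tag"] for row in reader)

# BOX boxes are enlarged from BULB boxes, so per-state counts should match.
print("BOX :", state_counts(BOX_CSV))
print("BULB:", state_counts(BULB_CSV))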

The annotations are stored one per line, together with additional information such as the
class tag and the file path of the corresponding image. The annotations are stored in a CSV
file whose header is shown in the listing below:

Filename;Annotation tag;Upper left corner X;Upper left corner Y;Lower right corner X;Lower right corner Y;Origin file;Origin frame number;Origin track;Origin track frame number
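
As an illustration of how this layout can be consumed, the sketch below (Python; the input
filename is a placeholder) groups the bounding boxes by image, using the column names from
the header above:

import csv
from collections import defaultdict

def load_annotations(path):
    """Group bounding boxes by image file.

    Returns {filename: [(tag, x1, y1, x2, y2), ...]} using the column
    names from the annotation header listing.
    """
    boxes = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter=";"):
            boxes[row["Filename"]].append((
                row["Annotation tag"],
                int(row["Upper left corner X"]),
                int(row["Upper left corner Y"]),
                int(row["Lower right corner X"]),
                int(row["Lower right corner Y"]),
            ))
    return boxes

# Hypothetical filename -- point this at any extracted annotation file.
annotations = load_annotations("frameAnnotationsBOX.csv")
for filename, items in list(annotations.items())[:3]:
    print(filename, items)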

Citation

@article{jensen2016vision,
  title={Vision for looking at traffic lights: Issues, survey, and perspectives},
  author={Jensen, Morten Born{\o} and Philipsen, Mark Philip and M{\o}gelmose, Andreas and Moeslund, Thomas Baltzer and Trivedi, Mohan Manubhai},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={17},
  number={7},
  pages={1800--1815},
  year={2016},
  doi={10.1109/TITS.2015.2509509},
  publisher={IEEE}
}
@inproceedings{philipsen2015traffic,
  title={Traffic light detection: A learning algorithm and evaluations on challenging dataset},
  author={Philipsen, Mark Philip and Jensen, Morten Born{\o} and M{\o}gelmose, Andreas and Moeslund, Thomas B and Trivedi, Mohan M},
  booktitle={2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC)},
  pages={2341--2345},
  year={2015},
  organization={IEEE}
}

License

CC BY-NC-SA 4.0

Data Summary

Type: Image
Amount: 43,007
Size: 4.21 GB
Provided by: Computer Vision & Robotics Research Laboratory at University of California, San Diego

The Laboratory for Intelligent and Safe Automobiles (LISA) is a multidisciplinary effort to explore innovative approaches to making future automobiles safer and "intelligent".