MSRA10K
2D Polygon
Common
License: Unknown

Overview

The MSRA Salient Object Database, which originally provided salient object annotations as bounding boxes drawn by 3-9 users, is widely used in the salient object detection and segmentation community.

Data Collection

The MSRA10K benchmark dataset (a.k.a. THUS10000) comprises per-pixel ground-truth annotations for 10,000 MSRA images (181 MB). Each image contains an unambiguous salient object, and the object region is accurately annotated with a pixel-wise ground-truth label (13.1 MB). We provide saliency maps (5.3 GB, containing 170,000 images) for our methods as well as 15 other state-of-the-art methods, including FT [1], AIM [2], MSS [3], SEG [4], SeR [5], SUN [6], SWD [7], IM [8], IT [9], GB [10], SR [11], CA [12], LC [13], AC [14], and CB [15]. Saliency segmentation results (71.3 MB) for FT [1], SEG [4], and CB [15] are also available.
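Methods evaluated on MSRA10K are typically scored by comparing a predicted saliency map against the binary pixel-wise ground-truth mask, commonly with the weighted F-measure. The sketch below (using NumPy, with a synthetic array standing in for a real image/mask pair; the function name and threshold are illustrative, not part of the dataset's tooling) shows one way such a comparison might look:

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure between a saliency map and a binary ground-truth mask.

    pred: float array with values in [0, 1]; gt: binary array (1 = salient).
    beta2 weights precision over recall (0.3 is the conventional choice
    in the salient object detection literature).
    """
    binary = pred >= thresh                      # threshold the saliency map
    tp = np.logical_and(binary, gt == 1).sum()   # true-positive pixels
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt == 1).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Synthetic stand-in: a 10x10 mask with a 4x4 "salient object",
# evaluated against itself (a perfect prediction).
gt = np.zeros((10, 10), dtype=np.uint8)
gt[3:7, 3:7] = 1
score = f_measure(gt.astype(float), gt)
print(round(score, 3))  # perfect prediction -> 1.0
```

With the real dataset, `pred` would come from one of the released saliency maps and `gt` from the corresponding ground-truth mask, both loaded as grayscale images and normalized to [0, 1].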

Citation

@article{ChengPAMI,
  author = {Ming-Ming Cheng and Niloy J. Mitra and Xiaolei Huang and Philip H. S. Torr and
Shi-Min Hu},
  title = {Global Contrast based Salient Region Detection},
  year  = {2015},
  journal = {IEEE TPAMI},
  volume={37},
  number={3},
  pages={569--582},
  doi = {10.1109/TPAMI.2014.2345401},
}


@conference{13iccv/Cheng_Saliency,
  title={Efficient Salient Region Detection with Soft Image Abstraction},
  author={Ming-Ming Cheng and Jonathan Warrell and Wen-Yan Lin and Shuai Zheng
and Vibhav Vineet and Nigel Crook},
  booktitle={IEEE ICCV},
  pages={1529--1536},
  year={2013},
}


@article{SalObjSurvey,
  author = {Ali Borji and Ming-Ming Cheng and Huaizu Jiang and Jia Li},
  title = {Salient Object Detection: A Survey},
  journal = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint = {arXiv:1411.5878},
  year = {2014},
}


@article{SalObjBenchmark,
  author = {Ali Borji and Ming-Ming Cheng and Huaizu Jiang and Jia Li},
  title = {Salient Object Detection: A Benchmark},
  journal = {IEEE TIP},
  year={2015},
  volume={24},
  number={12},
  pages={5706--5722},
  doi={10.1109/TIP.2015.2487833},
}
Data Summary

Type: Image
Amount: --
Size: 108.85MB
Provided by: College of Computer Science, Nankai University

Ming-Ming Cheng is a professor with the College of Computer Science, Nankai University, leading the Media Computing Lab. He received his Ph.D. degree from Tsinghua University in 2012 and then worked with Prof. Philip Torr at Oxford for 3 years.