Poor Visibility Datasets
Track 1: Object Detection in Poor Visibility Environments
About the Dataset
We structure this track into three sub-challenges. Each sub-challenge features a different poor-visibility outdoor condition and a different training protocol (paired versus unpaired images, annotated versus unannotated data, etc.).
- Dataset and baseline report: ArXiv
Training & Evaluation
In all three sub-challenges, participating teams are allowed to use external training data not mentioned above, including self-synthesized or self-collected data, but they must state so in their submissions. The ranking criterion is the mean Average Precision (mAP) on each testing set, with an Intersection-over-Union (IoU) threshold of 0.5: a detected region receives a score of 1 if its IoU with an annotated ground-truth region is greater than 0.5, and 0 otherwise. When mAPs at IoU 0.5 are equal, the mAPs at higher IoU thresholds (0.6, 0.7, 0.8) will be compared sequentially.
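For concreteness, below is a minimal sketch of the IoU matching rule described above. The box format, function names, and greedy matching order are illustrative assumptions, not the official evaluation code.

```python
# Minimal sketch of the IoU > 0.5 matching rule; not the official scorer.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2), assumed format

def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets: List[Box], gts: List[Box],
                     thresh: float = 0.5) -> List[int]:
    """Score each detection 1 if it overlaps an unclaimed ground-truth
    box with IoU > thresh, else 0. Detections are assumed sorted by
    descending confidence, as in standard mAP evaluation."""
    claimed = [False] * len(gts)
    scores = []
    for d in dets:
        best, best_iou = -1, thresh
        for j, g in enumerate(gts):
            overlap = iou(d, g)
            if not claimed[j] and overlap > best_iou:
                best, best_iou = j, overlap
        if best >= 0:
            claimed[best] = True  # each ground-truth box matches at most once
            scores.append(1)
        else:
            scores.append(0)
    return scores
```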
Sub-Challenge 1.1: Object Detection in the Hazy & Rainy Condition
We provide a set of 4,322 real-world hazy images collected from traffic surveillance, all labeled with object bounding boxes and categories (car, bus, bicycle, motorcycle, pedestrian), as the main training and/or validation set. We also release another set of 4,807 unannotated real-world hazy images collected from the same sources (containing the same classes of traffic objects, though without annotations), which may be used at the participants' discretion; one possible way to exploit them is sketched after the links below. There will be a hold-out testing set of 3,000 real-world hazy images, annotated with the same object classes.
- Paper: ArXiv
- Release Date: December 2017
- Download: Benchmarking Single Image Dehazing and Beyond
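The unannotated set can be exploited in several ways; one common option, shown purely as a hedged sketch, is pseudo-labeling. The `Detector` interface and its `predict` method below are hypothetical placeholders, not part of the challenge kit.

```python
# Hedged sketch: using the 4,807 unannotated hazy images via
# pseudo-labeling. `detector.predict` and the box/label/score tuple
# format are hypothetical assumptions for illustration only.
def pseudo_label(detector, image_paths, score_thresh=0.8):
    """Keep only high-confidence predictions as pseudo ground truth."""
    pseudo = {}
    for path in image_paths:
        preds = detector.predict(path)  # assumed: [(box, label, score), ...]
        kept = [(box, label) for (box, label, score) in preds
                if score >= score_thresh]
        if kept:
            pseudo[path] = kept
    return pseudo

# The pseudo-labeled images can then be mixed with the 4,322 annotated
# images for another round of detector training.
```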
Sub-Challenge 1.2: Face Detection in the Low-Light Condition
We provide 6,000 real-world low-light images captured during the nighttime at teaching buildings, streets, bridges, overpasses, parks, etc., all labeled with bounding boxes for human faces, as the main training and/or validation set. There will be a hold-out testing set of 4,000 low-light images, with human face bounding boxes annotated.
- Paper: ArXiv
- Release Date: March 2019
- Download: Extremely Dark Face Data (Updated!), Label
Sub-Challenge 1.3: Sea Life Detection in the Underwater Condition
In recent years, many underwater image enhancement algorithms based on deep learning have been proposed, but they still fall short of practical applications. Effectively correcting color deviation and blurred details while preserving the underlying image information remains a major challenge for underwater image enhancement; a classical color-correction baseline is sketched after the links below. In this challenge, we provide 1,924 underwater images with corresponding object annotations (including sea urchin, sea cucumber, and scallop) as the training and/or validation sets. A hold-out testing set of 360 real-world underwater images has also been collected to evaluate the performance of submitted algorithms.
- Paper: ArXiv
- Release Date: March 2019
- Download: Underwater Dataset
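As a point of reference for the color deviation mentioned above, below is a minimal sketch of gray-world white balance, a classical (pre-deep-learning) baseline; it is not the reference method of this challenge, and the function name is our own.

```python
# Hedged sketch: gray-world white balance, a classical baseline for the
# blue/green color cast of underwater images. NumPy-only illustration.
import numpy as np

def gray_world(img: np.ndarray) -> np.ndarray:
    """Rescale each RGB channel so all channel means match the overall
    mean; expects an HxWx3 uint8 image."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / (channel_means + 1e-8)
    balanced = img * gain  # broadcast per-channel gains
    return np.clip(balanced, 0, 255).astype(np.uint8)
```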
If you have any questions about this challenge track, please feel free to email cvpr2020.ug2challenge@gmail.com.