Track 1: Object Detection in Poor Visibility Environments
Many emerging applications, such as UAVs, autonomous/assisted driving, search-and-rescue robots, environment monitoring, security surveillance, transportation, and inspection, hinge on computer-vision-based sensing and understanding of outdoor environments. Such systems concern a wide range of target tasks, including detection, recognition, segmentation, tracking, and parsing. However, the performance of visual sensing and understanding algorithms is largely jeopardized by the challenging conditions of unconstrained, dynamic, degraded environments, e.g., moving platforms, bad weather, and poor illumination. While most current vision systems are designed to perform in “clear” environments, i.e., where subjects are well observable without (significant) attenuation or alteration, a dependable vision system must reckon with the entire spectrum of complex unconstrained outdoor environments. Take autonomous driving as an example: industry players have been tackling the challenges posed by inclement weather, yet heavy rain, haze, or snow still obscures the vision of on-board cameras and creates confusing reflections and glare, leaving state-of-the-art self-driving cars struggling. Another illustrative example can be found in city surveillance: even the commercial cameras adopted by governments appear fragile in challenging weather conditions. It is therefore highly desirable to study to what extent, and in what sense, such challenging visual conditions can be coped with, toward the goal of robust visual sensing and understanding in the wild, benefiting security/safety, autonomous driving, robotics, and an even broader range of signal and image processing applications.
Despite the blooming research on removing or alleviating the impact of these challenging conditions, such as dehazing, rain removal, and illumination enhancement, a unified view of these problems has been absent, as have collective efforts to resolve their common bottlenecks. On the one hand, such challenging visual conditions usually give rise to nonlinear, data-dependent degradations that are much more complicated than the well-studied noise or motion blur, which follow parameterized physical models known a priori; this naturally motivates a combination of model-based and data-driven approaches. On the other hand, most existing works cast the handling of these conditions as a post-processing step of signal restoration or enhancement after sensing, and then feed the restored data to visual understanding; the performance of high-level visual understanding tasks thus largely depends on the quality of restoration or enhancement. It remains questionable whether restoration-based approaches actually boost visual understanding performance, since the restoration/enhancement step is not optimized toward the target task and may introduce misleading information and artifacts.
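To make the distinction concrete, below is a minimal sketch of the conventional restore-then-detect pipeline discussed above, contrasted with detecting on the degraded input directly. The names `dehaze_model` and `detector` are hypothetical placeholders for user-supplied models, not references to any specific library or to an official baseline.

```python
# Minimal sketch (assumed structure, not an official baseline): the conventional
# pipeline first restores/enhances the degraded image, then runs detection on the
# restored output, so detection quality hinges on restoration fidelity.
# `dehaze_model` and `detector` are hypothetical callables supplied by the user.

def restore_then_detect(degraded_image, dehaze_model, detector):
    restored = dehaze_model(degraded_image)  # e.g., dehazing or low-light enhancement
    detections = detector(restored)          # detector never sees the raw degraded input
    return detections

def detect_directly(degraded_image, detector):
    # Alternative: run (or train) the detector on the degraded data itself,
    # avoiding artifacts introduced by a task-agnostic restoration step.
    return detector(degraded_image)
```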
UG2+ Challenge 1 aims to evaluate and advance object detection algorithms’ robustness in specific poor-visibility environmental situations including challenging weather and lighting conditions. It consists of the three sub-challenges below:
- (Semi-)Supervised Object Detection in Haze Conditions
- (Semi-)Supervised Face Detection in Low Light Conditions
- Sea Life Detection in the Underwater Condition
In all three sub-challenges, participant teams are allowed to use external training data not mentioned above, including self-synthesized or self-collected data, but they must state so in their submissions. Each leaderboard will be divided into two ranking lists: with and without external data. The ranking criterion is the mean average precision (mAP) on each testing set, with an Intersection-over-Union (IoU) threshold of 0.5: if the IoU between a detected region and an annotated region is greater than 0.5, the detection is assigned a score of 1, and 0 otherwise. When mAPs at IoU 0.5 are equal, the mAPs at higher IoU thresholds (0.6, 0.7, 0.8) will be compared sequentially.
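As a minimal illustration of this scoring and tie-breaking rule (a sketch, not the official evaluation code), the snippet below computes the IoU of two boxes in an assumed `[x1, y1, x2, y2]` format and compares two teams' per-threshold mAPs lexicographically; the numbers shown are hypothetical.

```python
# Sketch of the evaluation rule described above: a detection counts as correct
# when its IoU with an annotated region exceeds the threshold, and ties in
# mAP@0.5 are broken by mAP at higher thresholds (0.6, 0.7, 0.8).
# Box format [x1, y1, x2, y2] is an assumption, not part of the official spec.

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rank_key(maps_by_iou):
    """Tie-breaking key: compare mAP@0.5 first, then mAP@0.6, 0.7, 0.8."""
    return tuple(maps_by_iou[t] for t in (0.5, 0.6, 0.7, 0.8))

# Hypothetical example: team_a beats team_b on mAP@0.6 after tying at IoU 0.5.
team_a = {0.5: 0.71, 0.6: 0.65, 0.7: 0.52, 0.8: 0.33}
team_b = {0.5: 0.71, 0.6: 0.63, 0.7: 0.55, 0.8: 0.35}
print(rank_key(team_a) > rank_key(team_b))  # True
```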
If you have any questions about this challenge track, please feel free to email cvpr2020.ug2challenge@gmail.com.
The Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) is not a sponsor of this track's prizes and has not provided any funding in support of the challenge track. IARPA is not involved in the planning, execution, evaluation, or awarding of prizes for the Track 1 challenge.