Bridging the Gap Between Computational Photography and Visual Recognition:
UG2+ Prize Challenge
CVPR 2020
$6K in prizes

The rapid development of computer vision algorithms increasingly allows automatic visual recognition to be incorporated into a suite of emerging applications. Some of these applications operate in less-than-ideal circumstances, such as low-visibility environments, that degrade the captured images. In other, more extreme applications, such as imagers for flexible wearables, smart clothing sensors, ultra-thin headset cameras, implantable in vivo imaging, and others, standard camera systems cannot even be deployed, requiring new types of imaging devices. Computational photography addresses these concerns by designing new computational techniques and incorporating them into the image capture and formation pipeline. This raises a set of new questions. For example, what is the current state of the art in image restoration for images captured in non-ideal circumstances? How can inference be performed on novel kinds of computational photography devices?

Building on the success of the 1st (CVPR'18) and 2nd (CVPR'19) UG2 Prize Challenge workshops, we present the 3rd edition at CVPR 2020. It inherits the benchmark datasets, platform, and evaluation tools used by the first two UG2 workshops, while also addressing brand-new aspects of the overall problem and significantly expanding its scope.

Original high-quality contributions are solicited on the following topics:
  • Novel algorithms for robust object detection, segmentation, or recognition on outdoor mobility platforms (UAVs, gliders, autonomous cars, outdoor robots, etc.) under real-world adverse conditions and image degradations (haze, rain, snow, hail, dust, underwater, low illumination, low resolution, etc.)
  • Novel models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks and high-level computer vision tasks, as well as the underlying degradation and recovery processes of real-world images captured under complicated adverse visual conditions.
  • Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since for most real outdoor images captured in adverse visual conditions it is difficult to obtain any clean "ground truth" to compare against (see the sketch below).
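
As a rough illustration of the no-reference setting, the Python sketch below scores an image without any clean reference by using the variance of the Laplacian as a simple sharpness proxy. This is only an illustrative stand-in, not an official challenge metric, and the file name is hypothetical.

  # Minimal no-reference quality sketch: variance of the Laplacian as a
  # sharpness proxy. Illustrative only; not an official UG2+ metric.
  import cv2

  def no_reference_sharpness(image_path: str) -> float:
      """Return a simple no-reference sharpness score for one image."""
      img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
      if img is None:
          raise FileNotFoundError(image_path)
      # A larger high-frequency response loosely correlates with perceived sharpness.
      return float(cv2.Laplacian(img, cv2.CV_64F).var())

  if __name__ == "__main__":
      print(no_reference_sharpness("degraded_example.png"))  # hypothetical file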

Sponsors

Walmart, Kuaishou, CSIG


Available Challenges


The UG2+ Challenge seeks to advance the analysis of "difficult" imagery by applying image restoration and enhancement algorithms to improve recognition performance. Participants are tasked with developing novel algorithms that improve the analysis of imagery captured under problematic conditions.
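
As a rough sketch of this enhance-then-analyze idea, the Python snippet below applies a generic contrast enhancement (CLAHE) to a degraded frame before passing it to a detector. The enhancement stands in for any restoration model, and the detector is a deliberately unimplemented placeholder; participants may use whatever restoration and recognition components they prefer.

  # Sketch of an "enhance then detect" pipeline; CLAHE is a stand-in for any
  # restoration model, and detect() is a placeholder for any object detector.
  import cv2
  import numpy as np

  def enhance(img_bgr: np.ndarray) -> np.ndarray:
      """Contrast-limited adaptive histogram equalization on the luminance channel."""
      lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
      l, a, b = cv2.split(lab)
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

  def detect(img_bgr: np.ndarray):
      """Placeholder for a pretrained object detector of your choice."""
      raise NotImplementedError("plug in your detector here")

  # Hypothetical usage:
  # img = cv2.imread("hazy_scene.jpg")
  # boxes = detect(enhance(img))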

Object Detection in Poor Visibility Environments

While most current vision systems are designed to operate in environments where subjects are clearly observable, without significant attenuation or alteration, a dependable vision system must reckon with the entire spectrum of complex, unconstrained, and dynamic degraded outdoor environments. It is therefore highly desirable to study to what extent, and in what sense, such challenging visual conditions can be coped with, toward the goal of robust visual sensing.

Sub-Challenges

  1. Object Detection in Hazy and Rainy Conditions
  2. Face Detection in Low-Light Conditions
  3. Sea Life Detection in Underwater Conditions

Face Verification on FlatCam Images

The FlatCam is a lensless camera whose thin, inexpensive design allows it to be easily integrated into numerous applications for various computer vision tasks. However, FlatCam images contain noise and artifacts not present in images from standard lens-based cameras, which degrades performance on such tasks. This challenge explores new algorithms for better integrating lensless cameras into computer vision applications, using face verification as a working example (a toy reconstruction sketch follows the sub-challenge list below).

Sub-Challenges

  1. Image Enhancement for FlatCam Face Verification
  2. Image Reconstruction for FlatCam Face Verification
  3. End-to-End Face Verification on FlatCam Measurements
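
For intuition about the reconstruction sub-challenge, the toy Python sketch below solves a Tikhonov-regularized least-squares problem under the separable FlatCam measurement model Y = Phi_L X Phi_R^T described in the FlatCam literature. The calibration matrices and scene here are random stand-ins; a real system would use measured calibration data and typically stronger priors or learned reconstruction.

  # Toy Tikhonov-regularized reconstruction under the separable FlatCam model
  # Y = Phi_L @ X @ Phi_R.T. Phi_L and Phi_R are random stand-ins for the real
  # calibration matrices.
  import numpy as np

  def flatcam_reconstruct(Y, Phi_L, Phi_R, lam=1e-2):
      """Closed-form solution of min_X ||Phi_L X Phi_R^T - Y||^2 + lam ||X||^2."""
      U_L, s_L, VT_L = np.linalg.svd(Phi_L, full_matrices=False)
      U_R, s_R, VT_R = np.linalg.svd(Phi_R, full_matrices=False)
      Y_tilde = U_L.T @ Y @ U_R
      W = np.outer(s_L, s_R)              # per-element singular-value products
      Z = (W * Y_tilde) / (W ** 2 + lam)  # element-wise shrinkage
      return VT_L.T @ Z @ VT_R

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      X = rng.random((64, 64))            # unknown toy scene
      Phi_L = rng.standard_normal((128, 64))
      Phi_R = rng.standard_normal((128, 64))
      Y = Phi_L @ X @ Phi_R.T + 0.01 * rng.standard_normal((128, 128))
      X_hat = flatcam_reconstruct(Y, Phi_L, Phi_R)
      print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # relative error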

Keynote speakers

  • Judy Hoffman, Georgia Institute of Technology
  • Xiaoming Liu, Michigan State University
  • Vishal M. Patel, Johns Hopkins University
  • Zhiding Yu, NVIDIA
  • Dengxin Dai, ETH Zurich
  • Bihan Wen, Nanyang Technological University (NTU), Singapore
  • Humphrey Shi, University of Oregon
  • Xi Yin, Microsoft Cloud and AI

Important Dates

  • Challenge Registration: January 15 - March 1, 2020
  • DevKit Available: February 15, 2020
  • Challenge Dry-run: April 1 - April 15, 2020
  • Paper Submission Deadline: April 12, 2020
  • Notification of Paper Acceptance: April 15, 2020
  • Paper Camera Ready: April 17, 2020
  • Challenge Final Result Submission: April 15 - May 8, 2020
  • Challenge Winners Announcement: May 15, 2020
  • CVPR Workshop: June 19, 2020

Organizers

  • Zhangyang Wang, Texas A&M University
  • Walter J. Scheirer, University of Notre Dame
  • Ashok Veeraraghavan, Rice University
  • Jiaying Liu, Peking University
  • Risheng Liu, Dalian University of Technology
  • Wenqi Ren, Chinese Academy of Sciences
  • Wenhan Yang, City University of Hong Kong
  • Yingyan Lin, Rice University
  • Ye Yuan, Texas A&M University
  • Jasper Tan, Rice University
  • Wuyang Chen, Texas A&M University