Link to the final submission form was sent via email. If you did not receive it, please email cvpr2020.ug2challenge.track2@gmail.com

Timetable

  • January 31, 2020: Development kit and registration made available
  • March 15, 2020: Registration deadline
  • April 1-8, 2020: Dry run period**
  • April 8, 2020: Submissions deadline
** Every contestant will have the opportunity to verify, before the submission window closes, that their Docker container submission works. This dry run is optional but highly recommended.

Prizes

UG2+ evaluates algorithms for FlatCam image enhancement, reconstruction, and face verification. The most successful and innovative teams will be invited to present at the CVPR 2020 workshop. We provide three sub-challenge categories:

  1. FlatCam for Faces: Enhancement, Reconstruction, and Verification
    1. Image Enhancement for FlatCam Face Verification
      • 1st Place: $500
      • 2nd Place: $300
      • 3rd Place: $200
    2. Image Reconstruction for FlatCam Face Verification
      • 1st Place: $500
      • 2nd Place: $300
      • 3rd Place: $200
    3. End-to-End Verification on FlatCam Measurements
      • 1st Place: $500
      • 2nd Place: $300
      • 3rd Place: $200
For a total of $3,000 awarded in prizes.

Background Information

In this challenge, you will explore face verification using FlatCam images. In face verification, an algorithm takes two images as input, each containing a face, and must output whether or not the images contain the same identity. Face verification has been an active area of research, and many off-the-shelf algorithms exist. While most of these algorithms achieve very high accuracy on face images captured with standard lens-based cameras, they perform poorly on FlatCam images. The FlatCam is a lensless camera whose raw measurements do not resemble an image. An image reconstruction algorithm based on Tikhonov regularization (detailed here) maps the measurements to a recognizable image. The resulting reconstruction, which we call the Tikhonov reconstruction, contains unique artifacts and noise, which may be the reason for the decreased face verification accuracy.
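
For intuition, the FlatCam's separable imaging model relates a (single-channel) scene X to a sensor measurement Y through two calibration matrices, Y ≈ Φ_L X Φ_R^T, and the Tikhonov-regularized estimate of X has a closed form computable from the SVDs of those matrices. The sketch below is a minimal single-channel illustration with our own variable names and regularization weight; the reconstruction code shipped with the Development Kit is the authoritative reference.

    import numpy as np

    def tikhonov_reconstruct(Y, Phi_L, Phi_R, lam=1e-4):
        # Closed-form minimizer of
        #   || Phi_L @ X @ Phi_R.T - Y ||_F^2 + lam * ||X||_F^2
        # computed via the SVDs of the two calibration matrices.
        UL, sL, VLt = np.linalg.svd(Phi_L, full_matrices=False)
        UR, sR, VRt = np.linalg.svd(Phi_R, full_matrices=False)
        Y_tilde = UL.T @ Y @ UR                     # measurement in the SVD bases
        num = sL[:, None] * Y_tilde * sR[None, :]
        den = (sL[:, None] ** 2) * (sR[None, :] ** 2) + lam
        return VLt.T @ (num / den) @ VRt            # back to the image domain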

There are three possible approaches to dealing with this issue. First, one can enhance the Tikhonov reconstruction to output images that better match those produced by standard lens-based cameras. Second, one can design better reconstruction algorithms for the FlatCam measurements. Lastly, one can design new verification algorithms that perform better on FlatCam images. These approaches are explored in the following three sub-challenges:

  1. Sub-Challenge 2.1: Image Enhancement for FlatCam Face Verification. You will be given Tikhonov-reconstructed FlatCam images, and you must design an algorithm that enhances these images so as to improve the performance of face verification algorithms run on them. To gauge the effectiveness of your algorithm, 3 off-the-shelf face verification algorithms will be applied to Tikhonov-reconstructed FlatCam images from a hold-out test set after pre-processing by your enhancement algorithm. Your score will be the average accuracy of the face verification algorithms. Note that you will design neither a face verification algorithm nor a reconstruction algorithm.
  2. Sub-Challenge 2.2: Image Reconstruction for FlatCam Face Verification. You will design a new reconstruction algorithm that maps raw FlatCam sensor measurements to face images, which are then input to a face verification algorithm. You will be given calibration information for the FlatCam prototype used, as well as code for the standard reconstruction algorithm, which you may or may not use. You will not design a novel face verification algorithm. For testing, your algorithm will be applied to the sensor measurements of a hold-out test set, and its outputs will be passed to 3-4 off-the-shelf face verification algorithms. Your score will be the average accuracy of the face verification algorithms.
  3. Sub-Challenge 2.3: End-to-End Face Verification on FlatCam Measurements. You will design an end-to-end procedure for performing face verification directly on FlatCam sensor measurements. Your algorithm should take in a pair of sensor measurements and output whether or not the two measurements contain the same identity. Neither an image reconstruction nor an enhancement component is required in the final pipeline, although they are not forbidden. Your score will be the accuracy of your predictions (number of correct predictions divided by total image pairs), as illustrated in the sketch below.
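
To make the Sub-Challenge 2.3 scoring concrete, the snippet below computes prediction accuracy from an output file in the required one-line-per-pair format. The label filename here is hypothetical: ground truth for the hold-out test set is held by the organizers and is not released.

    import numpy as np

    # Predictions in the required format: one 1 or 0 per line.
    preds = np.loadtxt("ug2_challenge2-3_test_output_submission1.txt", dtype=int)
    # Hypothetical ground-truth file; held by the organizers only.
    labels = np.loadtxt("challenge2-3_test_labels.txt", dtype=int)

    # Accuracy = number of correct predictions / total image pairs.
    accuracy = (preds == labels).mean()
    print(f"Verification accuracy: {accuracy:.4f}")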

Submission Process

To submit your results to our leaderboard, follow these steps:

  1. Ensure that your team has registered using this registration form: LINK
  2. Download available training data from: Training Data.
  3. Create a standalone Docker image that contains your models and code for running your algorithm following these requirements:
    • Sub-challenge 2.1: Your Docker image must contain the file "/root/ug2_challenge2-1_submission#.sh" (# = 1, 2, or 3, referring to the submission number). When run, this script must enhance all the images (png format) in the directory "/root/challenge2-1_test_input" and save the outputs in "/root/challenge2-1_test_output/submission#", keeping all the image names the same. At test time, we will mount the folder "/root/challenge2-1_test_input" containing the reconstructed FlatCam images of the hold-out test set on which your algorithm will be evaluated. We will also mount a file "/root/test_filenames.txt" listing the filename of each image in "/root/challenge2-1_test_input", which you may use. See the Devkit for examples, and the entry-point sketch after these steps.
    • Sub-challenge 2.2: Your Docker image must contain the file "/root/ug2_challenge2-2_submission#.sh" (# = 1, 2, or 3, referring to the submission number). When run, this script must reconstruct all the FlatCam measurements (png format) in the directory "/root/challenge2-2_test_input" and save the outputs in "/root/challenge2-2_test_output/submission#", keeping all the image names the same. At test time, we will mount the folder "/root/challenge2-2_test_input" containing the FlatCam measurements of the hold-out test set on which your algorithm will be evaluated. We will also mount a file "/root/test_filenames.txt" listing the filename of each image in "/root/challenge2-2_test_input", which you may use. A sample Docker image is provided with the DevKit as a guide.
    • Sub-challenge 2.3: Your Docker image must contain the file "/root/ug2_challenge2-3_submission#.sh" (# = 1, 2, or 3, referring to the submission number). At test time, we will mount the following files:
      • /root/challenge2-3_test_pairs.txt: a text file where each line contains a space-separated pair of filenames of FlatCam measurements (absolute path).
      • /root/challenge2-3_test_input: a directory containing the images referred to in /root/challenge2-3_test_pairs.txt
      Your submission script, when run, must output a text file "/root/ug2_challenge2-3_test_output_submission#.txt" with the same number of lines as "/root/challenge2-3_test_pairs.txt", where each line is 1 or 0, indicating whether your algorithm predicts that the corresponding pair of images contains the same identity (1) or not (0).
  4. Upload your Docker image to Docker Hub and submit the link for the corresponding sub-challenge. Check your uploaded image with the testing code provided in our DevKit to make sure it runs correctly. Ensure that your code can run on the hardware listed under the Requirements section.
  5. Each team may submit up to 3 algorithms per sub-challenge. All submissions (per sub-challenge) must be packaged in one single Docker image uploaded to Docker Hub. The Docker image must contain all dependencies and code required to perform the model’s operation and will execute the contained model(s) upon run.
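
As a guide, here is a minimal sketch of what a Sub-challenge 2.1 entry point might look like; the required "/root/ug2_challenge2-1_submission1.sh" could then be a one-line script that invokes it. The mounted paths come from the requirements above, while the script name, the enhance() placeholder, and the use of Pillow/NumPy are our own assumptions; the sample Docker image in the DevKit remains the authoritative template.

    #!/usr/bin/env python3
    # Minimal Sub-challenge 2.1 entry point (sketch). The required shell
    # script /root/ug2_challenge2-1_submission1.sh could simply run:
    #   python3 /root/run_submission1.py
    import os
    import numpy as np
    from PIL import Image

    IN_DIR = "/root/challenge2-1_test_input"
    OUT_DIR = "/root/challenge2-1_test_output/submission1"
    FILE_LIST = "/root/test_filenames.txt"

    def enhance(img: np.ndarray) -> np.ndarray:
        # Placeholder: identity mapping. Substitute your trained model here.
        return img

    os.makedirs(OUT_DIR, exist_ok=True)
    with open(FILE_LIST) as f:
        names = [line.strip() for line in f if line.strip()]

    for name in names:
        img = np.asarray(Image.open(os.path.join(IN_DIR, name)))
        # Output filenames must match input filenames exactly.
        Image.fromarray(enhance(img)).save(os.path.join(OUT_DIR, name))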

Development Kit

The Development Kit (LINK) contains three components:

  • Python and Matlab code for the standard FlatCam reconstruction, as detailed here
  • A sample evaluation script with sample test images. This script only checks that your code runs properly; it does not perform the actual test-time evaluation. The sample test images are randomly drawn from the training sets and contain no images that will be used at actual test time.
  • A sample Docker image with correctly formatted submissions. Your Docker image need not inherit from this one.

Each team must submit one algorithm for each sub-challenge they wish to participate in; participants who have investigated several algorithms may submit up to 3 algorithms per sub-challenge, all packaged in a single Docker image as described under Submission Process above.

The input images will be provided to the container at run time through Docker’s mounting option, as will the output folders in which the model(s) save their results.
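
For local testing, the mounting works along these lines; the image name and host paths below are hypothetical, and the exact evaluation invocation is defined by the testing code in the DevKit:

    docker run --gpus all \
      -v /path/to/local_input:/root/challenge2-1_test_input \
      -v /path/to/local_output:/root/challenge2-1_test_output \
      -v /path/to/test_filenames.txt:/root/test_filenames.txt \
      yourdockerhubuser/ug2-track2:latest \
      bash /root/ug2_challenge2-1_submission1.sh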

Requirements

Software
  • Docker-CE
  • NVIDIA Docker
Hardware

The proposed algorithms should be able to run in systems with:

  • Up to and including an NVIDIA RTX 2080 Ti (11 GB)
  • Up to and including 12 CPU cores
  • Up to and including 32 GB of RAM

If you have any questions about this challenge track, please feel free to email cvpr2020.ug2challenge.track2@gmail.com.

Rules

Read the following guidelines carefully before submitting. Methods that do not comply with the guidelines will be disqualified.

  • We encourage participants to use the provided training and validation data for each task, and to make use of their own data or data from other sources for training. However, using any of the provided benchmark test sets, or any form of annotation of them, for either supervised or unsupervised training is strictly forbidden.
  • Participants who have investigated several algorithms may submit up to 3 algorithms per sub-challenge. Only a single submission per participant can be the winner of a single sub-challenge. Changes in algorithm parameters do not constitute a different method; all parameter tuning must be conducted using the dataset provided and any additional data the participants consider appropriate.
  • If a submitted algorithm makes inefficient use of deep learning, we may not be able to process the entire test set due to time constraints. To avoid this possibility, we will place limits on per-image runtime.
  • Results can vary depending on the GPU used; we noticed that results varied slightly across GPU models. We are therefore restricting our evaluation to one model (RTX 2080 Ti) and encourage participants to use this model in their development.

Eligibility

  • Foreign Nationals and International Developers: All Developers can participate, with this exception: residents of Iran, Cuba, North Korea, the Crimea Region of Ukraine, Sudan, or Syria, or of other countries prohibited on the U.S. State Department’s State Sponsors of Terrorism list. In addition, Developers are not eligible to participate if they are on the Specially Designated Nationals list promulgated and amended, from time to time, by the United States Department of the Treasury. It is the responsibility of the Developer to ensure that they are allowed to export their technology solution to the United States for the Live Test. Additionally, it is the responsibility of participants to ensure that no US export control restrictions would prevent them from participating when foreign nationals are involved. If there are US export control concerns, please contact the organizers and we will attempt to make reasonable accommodations if possible.

  • If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding on you, individually, and/or the entity you represent or are an employee of. If you are acting within the scope of your employment, as an employee, contractor, or agent of another party, you warrant that such party has full knowledge of your actions and has consented thereto, including your potential receipt of a prize. You further warrant that your actions do not violate your employer’s or entity’s policies and procedures.

  • The organizers reserve the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the prize challenge concerning your identity, email address, ownership of right, or information required for entering the prize challenge, you may be immediately disqualified from the challenge.

  • Individual Account. You may make submissions only under one, unique registration. You will be disqualified if you make submissions through more than one registration. You may make up to 3 submissions (one per sub-challenge), each containing at most 3 algorithms. Any submission that does not adhere to this will be disqualified.

The organizers reserve the right to disqualify any participating team for any of the reasons mentioned above and if deemed necessary.

Warranty, indemnity and release

You warrant that your Submission is your own original work and, as such, you are the sole and exclusive owner and rights holder of the Submission, and you have the right to make the Submission and grant all required licenses. You agree not to make any Submission that: (i) infringes any third party proprietary rights, intellectual property rights, industrial property rights, personal or moral rights or any other rights, including without limitation, copyright, trademark, patent, trade secret, privacy, publicity or confidentiality obligations; or (ii) otherwise violates any applicable state or federal law.

To the maximum extent permitted by law, you indemnify and agree to keep indemnified challenge Entities at all times from and against any liability, claims, demands, losses, damages, costs and expenses resulting from any act, default or omission of the entrant and/or a breach of any warranty set forth herein. To the maximum extent permitted by law, you agree to defend, indemnify and hold harmless the challenge Entities from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorneys fees) arising out of or accruing from: (a) your Submission or other material uploaded or otherwise provided by you that infringes any copyright, trademark, trade secret, trade dress, patent or other intellectual property right of any person or entity, or defames any person or violates their rights of publicity or privacy; (b) any misrepresentation made by you in connection with the challenge; (c) any non-compliance by you with these Rules; (d) claims brought by persons or entities other than the parties to these Rules arising from or related to your involvement with the challenge; and (e) your acceptance, possession, misuse or use of any Prize, or your participation in the challenge and any challenge-related activity.

You hereby release organizers from any liability associated with: (a) any malfunction or other problem with the challenge Website; (b) any error in the collection, processing, or retention of any Submission; or (c) any typographical or other error in the printing, offering or announcement of any Prize or winners.
