
Timetable

  • January 31, 2019: Development kit and registration made available
  • March 15 - April 15, 2019: Dry run period**
  • April 1, 2019: Registration deadline
  • May 1, 2019: Submissions deadline
  • May 20, 2019: Challenge results will be released
  • June 16, 2019: Most successful and innovative teams present at CVPR 2019 workshop
** Every contestant will have the opportunity to verify, before the submission window closes, that their Docker container submission works. This dry run is entirely optional but highly recommended.

Prizes

UG2+ evaluates algorithms for enhancement of images/videos at scale. The most successful and innovative teams will be invited to present at the CVPR 2019 workshop. We provide three sub-challenge categories:

  1. Object detection in poor-visibility environments:
    1. (Semi-)supervised object detection in the haze
      • 1st Place: $2K
      • 2nd Place: $1K
    2. (Semi-)supervised face detection in the low-light condition
      • 1st Place: $2K
      • 2nd Place: $1K
    3. Zero-shot object detection with raindrop occlusions
      • 1st Place: $2K
      • 2nd Place: $1K
A total of $9K will be awarded in prizes.

Submission Process

To submit your results to our leaderboard, follow these steps:

  1. Download the training/validation datasets and the Development Kit (DevKit; undergoing testing and to be finalized soon).
  2. Train or fine-tune your model and run it on the validation set. You can use the DevKit to evaluate validation performance.
  3. Please make sure that:
    • Your algorithms can take all images from the “/root/UG2/Sub_challenge2_1/input/” directory as input
    • You output the results into “/root/UG2/Sub_challenge2_1/output/userid/output/submission_#/” (# refers to the submission number). For Challenge 2, the output for each input image is a file with the same base name as the input file and a “.txt” extension, containing one bounding box location per line, for example (a minimal writer sketch follows this list):
      • 531 189 573 240
      • 269 193 284 206
  4. Pack all your code into a Docker container, and make sure the organizers can run “/root/UG2/Sub_challenge2_1/test_submission_#.sh” to generate the output files from the given input files using the provided algorithm. A README file explaining how your algorithms work and how to change their running behavior is also welcome.
  5. Upload your Docker image to Docker Hub and submit the image link to the corresponding sub-challenge. Check your uploaded image with the testing code provided in our DevKit to make sure it works correctly. Note that your image should inherit from our provided Docker template, whose link will be provided in the DevKit. The organizers do not guarantee to test images built from other templates.
  6. Each team may submit up to 3 algorithms per sub-challenge. All submissions (per sub-challenge) will be held within a single Docker container uploaded to Docker Hub. The Docker container will contain all dependencies and code required to perform the model’s operation and will execute the model(s) contained upon run.
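As a concrete illustration of the output convention described in step 3, here is a minimal Python sketch that writes one “.txt” file per input image. The detect() stub, the literal “userid”, the submission number, and the (x1 y1 x2 y2) coordinate order are assumptions for illustration only; the DevKit defines the authoritative conventions.

    import os

    # Paths follow the convention quoted above; "userid" and the submission
    # number are placeholders to substitute with your own values.
    INPUT_DIR = "/root/UG2/Sub_challenge2_1/input/"
    OUTPUT_DIR = "/root/UG2/Sub_challenge2_1/output/userid/output/submission_1/"

    def detect(image_path):
        # Stub: replace with your model's inference. Boxes are assumed to be
        # (x1, y1, x2, y2) pixel coordinates, matching the example lines above.
        return []

    os.makedirs(OUTPUT_DIR, exist_ok=True)
    for name in os.listdir(INPUT_DIR):
        prefix, _ = os.path.splitext(name)
        boxes = detect(os.path.join(INPUT_DIR, name))
        # One bounding box per line, e.g. "531 189 573 240".
        with open(os.path.join(OUTPUT_DIR, prefix + ".txt"), "w") as out:
            for x1, y1, x2, y2 in boxes:
                out.write("%d %d %d %d\n" % (x1, y1, x2, y2))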

Development Kit

The Development Kit consists of a Dockerfile containing the basic structure for the algorithm submission, as well as instructions on how to pull and run a supplemental quantitative classification module for the second challenge. The images, annotations, and lists specifying the training/validation sets for the challenge are provided separately and can be obtained via Google Drive.

Each team must submit one algorithm for each challenge they wish to participate in. Participants who have investigated several algorithms may submit up to 3 algorithms per challenge. All submissions (per challenge) will be held within a single Docker container uploaded to Docker Hub. The Docker container will contain all dependencies and code required to perform the model’s operation and will execute the model(s) contained upon run.

The input images will be provided to the container at run time through Docker’s mounting option, as will the output folders for the model(s) to save their results. Each model must be run on all images contained within the input folder and must save the new images to their respective output folder locations, without any name changes or missing images.
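For example, an evaluation run might be invoked along these lines (an illustrative sketch only: the host paths, the image name, and the script number are placeholders, and the actual invocation is defined by the organizers and the DevKit):

    nvidia-docker run --rm \
        -v /host/input:/root/UG2/Sub_challenge2_1/input \
        -v /host/output:/root/UG2/Sub_challenge2_1/output \
        yourname/ug2-submission \
        bash /root/UG2/Sub_challenge2_1/test_submission_1.sh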

Requirements

Software
  • Docker-CE
  • NVIDIA Docker
  • CUDA 9.0
  • cuDNN v7.0
Docker

Our provided Docker image will include the following software (an illustrative Dockerfile sketch follows this list):

  • Ubuntu 16.04
  • CUDA 8.0/9.0
  • cuDNN v7
  • PyTorch (recommended)
  • TensorFlow
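As a rough sketch of how a submission image could be structured (the base image tag, the package choices, and the paths are assumptions; in practice you should inherit from the official template image referenced in the DevKit):

    # Illustrative only: in practice, inherit FROM the DevKit's template image.
    FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

    # Python plus your framework of choice, e.g. PyTorch (recommended) or TensorFlow.
    RUN apt-get update && apt-get install -y python3 python3-pip
    RUN pip3 install torch torchvision

    # Place code and the entry script where the organizers expect them.
    COPY . /root/UG2/Sub_challenge2_1/
    WORKDIR /root/UG2/Sub_challenge2_1/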
Hardware

The proposed algorithms should be able to run in systems with:

  • Up to and including a Titan Xp (12 GB)
  • Up to and including 12 cores
  • Up to and including 32 GB memory

If you have any questions about this challenge track, please feel free to email ug2challenge.2@gmail.com.

Rules

Read the following guidelines carefully before submitting. Methods that do not comply with the guidelines will be disqualified.

  • We encourage participants to use the provided training and validation data for each task, as well as their own data or data from other sources for training. However, the use of any form of annotation, or of any of the provided benchmark test sets, for either supervised or unsupervised training is strictly forbidden.
  • Participants who have investigated several algorithms may submit up to 3 algorithms per challenge. Only a single submission per participant can win a given sub-challenge. Changes in algorithm parameters do not constitute a different method; all parameter tuning must be conducted using the dataset provided and any additional data the participants consider appropriate.
  • If an algorithm is submitted that makes inefficient use of deep learning, we may not be able to process the entire test set due to time constraints. To avoid this possibility, we will place limits on per-image runtime.
  • Results can vary based on the GPU used; we noticed that results varied slightly across GPU models. We are therefore restricting our evaluation to a single model (Titan Xp) and encourage participants to use this model in their development.

Eligibility

  • Foreign Nationals and International Developers: All Developers can participate, with this exception: residents of Iran, Cuba, North Korea, the Crimea Region of Ukraine, Sudan, or Syria, or of other countries on the U.S. State Department’s State Sponsors of Terrorism list. In addition, Developers are not eligible to participate if they are on the Specially Designated Nationals list promulgated and amended, from time to time, by the United States Department of the Treasury. It is the responsibility of the Developer to ensure that they are allowed to export their technology solution to the United States for the Live Test. Additionally, it is the responsibility of participants to ensure that no US export control restrictions would prevent them from participating when foreign nationals are involved. If there are US export control concerns, please contact the organizers and we will attempt to make reasonable accommodations if possible.

  • If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding on you, individually, and/or the entity you represent or of which you are an employee. If you are acting within the scope of your employment as an employee, contractor, or agent of another party, you warrant that such party has full knowledge of your actions and has consented thereto, including your potential receipt of a prize. You further warrant that your actions do not violate your employer’s or entity’s policies and procedures.

  • The organizers reserve the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the prize challenge concerning your identity, email address, ownership of right, or information required for entering the prize challenge, you may be immediately disqualified from the challenge.

  • Individual Account. You may make submissions only under one, unique registration. You will be disqualified if you make submissions through more than one registration. You may make up to 3 submissions (one per challenge), containing at most 3 algorithms per submission. Any submission that does not adhere to this will be disqualified.

The organizers reserve the right to disqualify any participating team for any of the reasons mentioned above and if deemed necessary.

Warranty, indemnity and release

You warrant that your Submission is your own original work and, as such, you are the sole and exclusive owner and rights holder of the Submission, and you have the right to make the Submission and grant all required licenses. You agree not to make any Submission that: (i) infringes any third party proprietary rights, intellectual property rights, industrial property rights, personal or moral rights or any other rights, including without limitation, copyright, trademark, patent, trade secret, privacy, publicity or confidentiality obligations; or (ii) otherwise violates any applicable state or federal law.

To the maximum extent permitted by law, you indemnify and agree to keep indemnified the challenge Entities at all times from and against any liability, claims, demands, losses, damages, costs and expenses resulting from any act, default or omission of the entrant and/or a breach of any warranty set forth herein. To the maximum extent permitted by law, you agree to defend, indemnify and hold harmless the challenge Entities from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorneys’ fees) arising out of or accruing from: (a) your Submission or other material uploaded or otherwise provided by you that infringes any copyright, trademark, trade secret, trade dress, patent or other intellectual property right of any person or entity, or defames any person or violates their rights of publicity or privacy; (b) any misrepresentation made by you in connection with the challenge; (c) any non-compliance by you with these Rules; (d) claims brought by persons or entities other than the parties to these Rules arising from or related to your involvement with the challenge; and (e) your acceptance, possession, misuse or use of any Prize, or your participation in the challenge and any challenge-related activity.

You hereby release organizers from any liability associated with: (a) any malfunction or other problem with the challenge Website; (b) any error in the collection, processing, or retention of any Submission; or (c) any typographical or other error in the printing, offering or announcement of any Prize or winners.

Competition Framework

The Participant has requested permission to use the dataset as compiled by Texas A&M University, Peking University, and University of Chinese Academy of Sciences. In exchange for such permission, Participant hereby agrees to the following terms and conditions:

  • Texas A&M University, Peking University, and University of Chinese Academy of Sciences make no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.

  • Pre-trained models are allowed in the competition.

  • Participants are not restricted to train their algorithms on the provided training set. Collecting and training on additional data is encouraged.

Frequently asked questions

  • The FAQ list will be continuously maintained and updated based on incoming questions: FAQ list

