Challenge 2 (Action Recognition in the Dark) Datasets
Track 2: Action Recognition from Dark Videos
About the Dataset
The ARID dataset [1] is dedicated to action recognition in dark videos (captured without additional sensors such as an IR sensor) and is, to our knowledge, the first of its kind. For this track, we structure it into two sub-challenges, each featuring different actions and training protocols.
- Dataset and baseline report: ArXiv
- Note that for this challenge track we have updated the dataset described in the report: the updated dataset contains more scenarios but the same number of classes.
Training & Evaluation
In both sub-challenges, participating teams are allowed to use external training data not mentioned above, including self-synthesized or self-collected data, but they must state so in their submissions (the "Method description" section in Codalab). The ranking criterion is the Top-1 accuracy on each testing set.
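For reference, below is a minimal sketch of how Top-1 accuracy can be computed from a prediction file; the file names and column headers ("video", "label") are illustrative assumptions, not the official submission format.

```python
# Minimal sketch of the Top-1 accuracy metric used for ranking.
# The column headers "video" and "label" are assumptions for illustration.
import csv

def top1_accuracy(gt_csv: str, pred_csv: str) -> float:
    """Fraction of test videos whose predicted class matches the ground truth."""
    def load(path):
        with open(path, newline="") as f:
            return {row["video"]: row["label"] for row in csv.DictReader(f)}

    gt = load(gt_csv)
    pred = load(pred_csv)
    correct = sum(1 for video, label in gt.items() if pred.get(video) == label)
    return correct / len(gt)

if __name__ == "__main__":
    print(f"Top-1 accuracy: {top1_accuracy('test_gt.csv', 'predictions.csv'):.4f}")
```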
Sub-Challenge 2.1: Fully Supervised Action Recognition in the Dark
Participants are to perform action recognition in dark videos in a fully supervised manner. We provide a total of 1,837 videos capturing actions performed by volunteers in the dark, to be used as the training and/or validation sets. The videos cover 6 action classes (running, sitting, standing, turning, walking and waving), and the labels of all training videos are provided in a single .csv file. For evaluation, the final action recognition test will be performed on a hold-out test set of 1,289 videos containing the same classes as the provided training set. A minimal data-loading sketch is given after the links below.
- Paper: ArXiv
- Release Date: June, 2020
- Download: Sub-Challenge 2.1 Training and Dry-run (Validation) Data
- Download NEW!: Sub-Challenge 2.1 Testing Data (pwd:ug2@cvpr2021)
- Codalab: Codalab Link
- Reference Code: Reference Github Repository
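As a starting point, here is a minimal PyTorch Dataset sketch for loading the labeled dark videos of this sub-challenge. The label-file column names ("video", "label") and the decoding details are assumptions for illustration; the exact layout of the provided .csv is defined by the downloaded data.

```python
# Minimal clip-loading sketch for the fully supervised track, assuming the
# label file maps a video file name to one of the 6 class names.
import csv
import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

CLASSES = ["running", "sitting", "standing", "turning", "walking", "waving"]

class AridClips(Dataset):
    def __init__(self, video_dir, label_csv, num_frames=16):
        self.video_dir = video_dir
        self.num_frames = num_frames
        with open(label_csv, newline="") as f:
            # Assumed columns: "video", "label" (hypothetical names).
            self.items = [(r["video"], CLASSES.index(r["label"]))
                          for r in csv.DictReader(f)]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        name, label = self.items[idx]
        cap = cv2.VideoCapture(os.path.join(self.video_dir, name))
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        # Uniformly sample a fixed number of frames per clip.
        idxs = np.linspace(0, len(frames) - 1, self.num_frames).astype(int)
        clip = np.stack([frames[i] for i in idxs]).astype(np.float32) / 255.0
        # (T, H, W, C) -> (C, T, H, W), the layout expected by 3D-CNN backbones.
        clip = torch.from_numpy(clip).permute(3, 0, 1, 2)
        return clip, label
```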
Sub-Challenge 2.2: Semi-supervised Action Recognition in the Dark
Participants are expected to perform action recognition in dark videos in a semi-supervised manner. We provide a subset of the HMDB51 dataset [2] that includes 643 labeled videos from 5 classes (drink, jump, pick, pour and push). To improve the effectiveness of approaches on real dark videos, we also provide an unlabeled set of 1,513 videos of the same 5 classes from the ARID dataset, which may be used at the participants' discretion. Note that manually labeling these 1,513 videos is strictly PROHIBITED. The final action recognition test will be performed on a hold-out test set of 722 real dark videos from the ARID dataset, containing the same classes as the provided training set. A simple frame-brightening sketch is given after the links below.
- Paper: ArXiv
- Release Date: June, 2020
- Download: Sub-Challenge 2.2 Training and Dry-run (Validation) Data
- Download NEW!: Sub-Challenge 2.2 Testing Data (pwd:ug2@cvpr2021)
- Codalab: Codalab Link
- Reference Code: Reference Github Repository
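As one simple way to cope with the low brightness of the real dark videos, the sketch below applies gamma intensity correction to each frame before recognition. This is only an illustrative preprocessing step with an arbitrary gamma value, not an officially prescribed approach.

```python
# Minimal sketch of gamma intensity correction (GIC) for brightening dark frames.
# The gamma value is illustrative, not an officially recommended setting.
import cv2
import numpy as np

def gamma_correct(frame_bgr: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Brighten a dark frame: out = in ** gamma on intensities scaled to [0, 1]."""
    scaled = frame_bgr.astype(np.float32) / 255.0
    brightened = np.power(scaled, gamma)
    return (brightened * 255.0).clip(0, 255).astype(np.uint8)

def gamma_correct_video(in_path: str, out_path: str, gamma: float = 0.4) -> None:
    """Write a brightened copy of a dark video, frame by frame."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(gamma_correct(frame, gamma))
    cap.release()
    writer.release()
```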
If you have any questions about this challenge track, please feel free to email cvpr2021.ug2challenge@gmail.com.
References:
[1] Xu, Y., Yang, J., Cao, H., Mao, K., Yin, J. and See, S., 2020. ARID: A New Dataset for Recognizing Action in the Dark. arXiv preprint arXiv:2006.03876.
[2] Kuehne, H., Jhuang, H., Garrote, E., Poggio, T. and Serre, T., 2011. HMDB: A Large Video Database for Human Motion Recognition. In Proc. of IEEE International Conference on Computer Vision (ICCV).