Action Concept Grounding Network for Semantically-Consistent Video Generation

Preprint. Under review.


Anonymous Authors.


[Paper]
[GitHub]
[Bibtex]



Abstract

Recent work on self-supervised video prediction has mainly focused on passive forecasting and low-level action-conditional prediction, which sidesteps the problem of semantic learning. We introduce the task of semantic action-conditional video prediction, which can be regarded as the inverse problem of action recognition. The key challenge of this new task lies in how to effectively inform the model of semantic action information. To bridge vision and language, we utilize the idea of capsules and propose a novel video prediction model, the Action Concept Grounding Network (ACGN). Our method is evaluated on two newly designed synthetic datasets, CLEVR-Building-Blocks and Sapien-Kitchen, and experiments show that, given different action labels, ACGN correctly conditions on the instruction and generates the corresponding future frames without the need for bounding boxes. We further demonstrate that the trained model can make out-of-distribution predictions for concurrent actions, be quickly adapted to new object categories, and exploit its learned features for object detection.
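
For readers unfamiliar with the setting, the sketch below illustrates only the input/output contract of semantic action-conditional video prediction: a model receives a context frame together with a symbolic instruction (an action verb and a target object) and must render the corresponding future frame. This is a minimal PyTorch illustration under our own naming assumptions (ToyActionConditionalPredictor and all hyperparameters are hypothetical), not the ACGN architecture described in the paper.

# Illustrative sketch only -- NOT the authors' ACGN implementation.
# It shows the interface of semantic action-conditional video prediction:
# context frame + symbolic action label -> predicted future frame.
import torch
import torch.nn as nn

class ToyActionConditionalPredictor(nn.Module):
    def __init__(self, num_actions: int, num_objects: int, hidden: int = 64):
        super().__init__()
        # Encode each 64x64 RGB context frame into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Embed the symbolic instruction (action verb and target object).
        self.action_emb = nn.Embedding(num_actions, hidden)
        self.object_emb = nn.Embedding(num_objects, hidden)
        # Decode the fused representation back into a future frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame, action_id, object_id):
        feat = self.encoder(frame)                        # (B, hidden, 16, 16)
        cond = self.action_emb(action_id) + self.object_emb(object_id)
        feat = feat + cond[:, :, None, None]              # broadcast over space
        return self.decoder(feat)                         # predicted next frame

# Usage: predict the next frame for a batch of two clips.
model = ToyActionConditionalPredictor(num_actions=5, num_objects=10)
frames = torch.rand(2, 3, 64, 64)
next_frames = model(frames, torch.tensor([0, 3]), torch.tensor([1, 7]))
print(next_frames.shape)  # torch.Size([2, 3, 64, 64])

In this toy version the instruction conditions the prediction through a simple additive embedding; the paper's point is that such low-level conditioning is exactly what ACGN improves on by grounding action concepts with capsules.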



New Task: Semantic Action-Conditional Video Prediction








Our Solution: Action Concept Grounding Network (ACGN)






Qualitative Results on CLEVR-Building-Blocks

(Side-by-side videos: ground truth and predictions)




Qualitative Results on Sapien-Kitchen

(Side-by-side videos: ground truth and predictions)




Quantitative Comparison






Counterfactual Generations






Acknowledgements

To be added after review period.