New Paper July 2019
Predicting future frames in natural video sequences is a challenging task that is receiving increasing attention in the computer vision community. However, existing models
suffer from severe loss of temporal information when the predicted sequence is long.
In contrast to previous methods, which focus on generating more realistic content, this paper
extensively studies the importance of sequential-order information for video generation.
A novel Shuffling sEquence gEneration network (SEE-Net) is proposed that can learn
to discriminate unnatural sequential orders by shuffling the video frames and comparing
them to the real video sequence. Systematic experiments on three datasets, covering both
synthetic and real-world videos, demonstrate the effectiveness of shuffling sequence generation for video prediction and show state-of-the-art performance in both qualitative and quantitative evaluations. The source code is available at https://github.com/andrewjywang/SEENet
Paper Link: https://arxiv.org/pdf/1907.08845.pdf
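The core idea, training against shuffled frame orders as negatives, can be illustrated with a toy sketch. This is not the authors' implementation: `make_order_pairs` and `smoothness_score` are hypothetical names, and the smoothness score is only a hand-crafted stand-in for SEE-Net's learned order discriminator.

```python
import numpy as np

def make_order_pairs(frames, rng):
    """Given a clip as a (T, H, W) array, return a (real, shuffled) pair.

    The shuffled copy serves as a negative example of an unnatural
    temporal order, mirroring the abstract's shuffling strategy.
    """
    T = len(frames)
    perm = rng.permutation(T)
    # Re-draw until the permutation actually changes the frame order.
    while np.array_equal(perm, np.arange(T)):
        perm = rng.permutation(T)
    return frames, frames[perm]

def smoothness_score(frames):
    """Sum of absolute differences between adjacent frames.

    A smoothly varying real sequence scores low; a shuffled one
    typically scores higher. A learned discriminator would replace this.
    """
    return float(np.abs(np.diff(frames, axis=0)).sum())

# Toy clip: 8 frames of 2x2 pixels, intensity drifting upward over time.
rng = np.random.default_rng(0)
frames = np.arange(8).reshape(8, 1, 1) * np.ones((1, 2, 2))
real, shuffled = make_order_pairs(frames, rng)
print(smoothness_score(real), smoothness_score(shuffled))
```

In this sketch the real clip is monotone, so its smoothness score is minimal among all orderings of its frames; a trained discriminator would learn a far richer notion of "natural order" than this hand-crafted proxy.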