Le Zhang, Joey Tianyi Zhou, Ming-Ming Cheng, Yun Liu, Jia-Wang Bian, Zeng Zeng, Chunhua Shen
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, pp. 1460-1466, doi: 10.1109/TPAMI.2020.2976969.
Publication year: 2020

Is a recurrent network really necessary for learning a good visual representation for video-based person re-identification (VPRe-id)? In this paper, we first show that the common practice of employing recurrent neural networks (RNNs) to aggregate temporal-spatial features may not be optimal. Specifically, through a diagnostic analysis, we show that the recurrent structure may be less effective at learning temporal dependencies than expected and implicitly yields an orderless representation. Based on this observation, we then present a simple yet surprisingly powerful approach for VPRe-id, in which we treat VPRe-id as an efficient orderless ensemble of image-based person re-identification problems. More specifically, we divide videos into individual images and re-identify persons with an ensemble of image-based rankers. Under the i.i.d. assumption, we provide an error bound that sheds light on how VPRe-id can be improved. Our work also presents a promising way to bridge the gap between video-based and image-based person re-identification. Comprehensive experimental evaluations demonstrate that the proposed solution achieves state-of-the-art performance on multiple widely used datasets (iLIDS-VID, PRID 2011, and MARS).

A Chinese version of the paper (论文中文版) can be found at: https://mmcheng.net/wp-content/uploads/2021/09/TPAMI2020_video_reid_zh.pdf

The project page is available at: https://github.com/ZhangLeUestc/VideoReid-TPAMI2020
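The orderless-ensemble idea described above can be sketched as follows. This is a minimal illustration, not the authors' released code: per-frame features are assumed to come from some image-based re-identification model (represented here by generic vectors), and cosine similarity with mean aggregation is one simple instance of an orderless ensemble of per-frame rankers.

```python
import numpy as np

def ensemble_rank(query_frames, gallery_feats):
    """Rank gallery identities for one query video by an orderless
    ensemble of per-frame rankings.

    query_frames: (T, D) array, one feature vector per video frame
                  (assumed to come from an image-based re-id model)
    gallery_feats: (N, D) array, one feature vector per gallery identity
    Returns gallery indices sorted from best to worst match.
    """
    # L2-normalise so the dot product equals cosine similarity
    q = query_frames / np.linalg.norm(query_frames, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T              # (T, N): each frame scores every gallery entry
    scores = sims.mean(axis=0)  # orderless aggregation: frame order is ignored
    return np.argsort(-scores)  # best match first
```

Because the aggregation is a mean over frames, the ranking is invariant to any permutation of the input frames, which is exactly the orderless property the paper highlights.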
