Skeleton–Silhouette Complementary Perception: Toward Robust Gait Recognition

  • Xiaokai Liu Information Science and Technology College, Dalian Maritime University, Dalian 116026, Liaoning Province, China
  • Luyuan Hao Information Science and Technology College, Dalian Maritime University, Dalian 116026, Liaoning Province, China
Keywords: Complementary perception, Gait recognition, Feature fusion

Abstract

Gait, the unique pattern of how a person walks, has emerged as one of the most promising biometric features in modern intelligent sensing. Unlike fingerprints or facial characteristics, gait can be captured unobtrusively and at a distance, without requiring the subject’s awareness or cooperation. This makes it highly suitable for long-range surveillance, forensic investigation, and smart environments where contactless recognition is crucial. Traditional gait-recognition systems rely either on silhouettes, which capture the outer appearance of a person, or on skeletons, which describe the internal structure of human motion. Each modality provides only a partial understanding of gait. Silhouettes emphasize shape and contour but are easily distorted by clothing or carried objects; skeletons describe motion dynamics and limb coordination but lose discriminative details about body shape. This article presents the concept of Complementary Semantic Embedding (CSE), a unified framework that merges silhouette and skeleton information into a comprehensive semantic representation of human walking. By modeling the complementary nature of appearance and structure, the approach achieves more robust and accurate gait recognition even under challenging conditions.
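The complementary fusion idea described above — letting appearance (silhouette) and structure (skeleton) embeddings compensate for each other's weaknesses — can be sketched as a simple per-dimension gated late fusion. This is an illustrative toy, not the paper's actual CSE architecture: the feature dimensions, the gating rule, and the function name `complementary_fuse` are all assumptions made for the sketch.

```python
import numpy as np

def complementary_fuse(sil_feat, skel_feat):
    """Illustrative late fusion of a silhouette (appearance) embedding and a
    skeleton (structure) embedding: a sigmoid gate decides, per dimension,
    how much weight each modality receives, so the fused vector is an
    element-wise convex combination of the two streams."""
    # Gate driven by the element-wise difference between the two embeddings;
    # where the modalities disagree strongly, the gate shifts the balance.
    gate = 1.0 / (1.0 + np.exp(-(sil_feat - skel_feat)))
    return gate * sil_feat + (1.0 - gate) * skel_feat

rng = np.random.default_rng(0)
sil = rng.standard_normal(128)   # stand-in for a CNN silhouette embedding
skel = rng.standard_normal(128)  # stand-in for a GCN skeleton embedding
fused = complementary_fuse(sil, skel)
print(fused.shape)  # (128,)
```

In a trained system the gate would be a learned module (and the two embeddings would come from modality-specific encoders); the fixed sigmoid here only demonstrates the shape of the computation.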

Published
2025-12-12