TY - JOUR
T1 - Hand gesture recognition framework using a lie group based spatio-temporal recurrent network with multiple hand-worn motion sensors
AU - Wang, Shu
AU - Wang, Aiguo
AU - Ran, Mengyuan
AU - Liu, Li
AU - Peng, Yuxin
AU - Liu, Ming
AU - Su, Guoxin
AU - Alhudhaif, Adi
AU - Alenezi, Fayadh
AU - Alnaim, Norah
N1 - Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2022/8
Y1 - 2022/8
N2 - The primary goal of hand gesture recognition with wearables is to facilitate the realization of gestural user interfaces in mobile and ubiquitous environments. A key challenge in wearable-based hand gesture recognition is the fact that a hand gesture can be performed in several ways, with each consisting of its own configuration of motions and their spatio-temporal dependencies. However, existing methods generally focus on the characteristics of a single point on the hand but ignore the diversity of motion information over the hand skeleton; as a result, they face two key challenges in characterizing hand gestures over multiple wearable sensors: motion representation and motion modeling. This leads us to define a spatio-temporal framework, named STGauntlet, that explicitly characterizes the hand motion context of spatio-temporal relations among multiple bones and detects hand gestures in real time. In particular, our framework incorporates a Lie group-based representation to capture the inherent structural varieties of hand motions with spatio-temporal dependencies among multiple bones. To evaluate our framework, we developed a hand-worn prototype device with multiple motion sensors. Our in-lab study on a dataset collected from nine subjects suggests that our approach significantly outperforms state-of-the-art methods, achieving average accuracies of 98.2% and 95.6% for subject-dependent and subject-independent gesture recognition, respectively. We also demonstrate in-the-wild applications that highlight the interaction capability of our framework.
AB - The primary goal of hand gesture recognition with wearables is to facilitate the realization of gestural user interfaces in mobile and ubiquitous environments. A key challenge in wearable-based hand gesture recognition is the fact that a hand gesture can be performed in several ways, with each consisting of its own configuration of motions and their spatio-temporal dependencies. However, existing methods generally focus on the characteristics of a single point on the hand but ignore the diversity of motion information over the hand skeleton; as a result, they face two key challenges in characterizing hand gestures over multiple wearable sensors: motion representation and motion modeling. This leads us to define a spatio-temporal framework, named STGauntlet, that explicitly characterizes the hand motion context of spatio-temporal relations among multiple bones and detects hand gestures in real time. In particular, our framework incorporates a Lie group-based representation to capture the inherent structural varieties of hand motions with spatio-temporal dependencies among multiple bones. To evaluate our framework, we developed a hand-worn prototype device with multiple motion sensors. Our in-lab study on a dataset collected from nine subjects suggests that our approach significantly outperforms state-of-the-art methods, achieving average accuracies of 98.2% and 95.6% for subject-dependent and subject-independent gesture recognition, respectively. We also demonstrate in-the-wild applications that highlight the interaction capability of our framework.
KW - Hand gesture recognition
KW - Lie group
KW - Motion modeling
KW - Wearable sensors
UR - https://www.scopus.com/pages/publications/85131137509
U2 - 10.1016/j.ins.2022.05.085
DO - 10.1016/j.ins.2022.05.085
M3 - Article
AN - SCOPUS:85131137509
SN - 0020-0255
VL - 606
SP - 722
EP - 741
JO - Information Sciences
JF - Information Sciences
ER -