TY - JOUR
T1 - STGNN: a new self-trained generalized neural networks task space control for robot manipulators
AU - Elmogy, Ahmed
AU - Elawady, Wael
AU - El-Ghaish, Hany
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025.
PY - 2025/7
Y1 - 2025/7
N2 - The inherent uncertainties in the dynamic and kinematic parameters of robot manipulators pose significant challenges for their control in task space. This paper introduces an adaptive sliding mode control strategy that leverages self-trained generalized neural networks to enhance the precision and robustness of robot manipulator control. The proposed framework, called STGNN, integrates deep neural networks for real-time estimation and adjustment of dynamic and kinematic parameters. Key contributions of this work include the development of semi-supervised learning models for adaptive dynamics and kinematics estimation. By training the generalized neural network on different trajectories, the same models can operate across a variety of tasks, significantly improving the robustness, scalability, and adaptability of the control system. The STGNN approach allows the models to leverage both labeled and unlabeled data through a self-training mechanism that enhances their ability to generalize and adapt to diverse operational scenarios. This capability is particularly beneficial for real-time applications, where the ability to learn and adapt on the fly is crucial. Simulation results demonstrate the superior performance of the STGNN framework in controlling robot manipulators in operational space, showcasing its potential for real-world applications. The proposed method advances the adaptability, accuracy, and efficiency of robot control systems, offering notable improvements over conventional approaches.
AB - The inherent uncertainties in the dynamic and kinematic parameters of robot manipulators pose significant challenges for their control in task space. This paper introduces an adaptive sliding mode control strategy that leverages self-trained generalized neural networks to enhance the precision and robustness of robot manipulator control. The proposed framework, called STGNN, integrates deep neural networks for real-time estimation and adjustment of dynamic and kinematic parameters. Key contributions of this work include the development of semi-supervised learning models for adaptive dynamics and kinematics estimation. By training the generalized neural network on different trajectories, the same models can operate across a variety of tasks, significantly improving the robustness, scalability, and adaptability of the control system. The STGNN approach allows the models to leverage both labeled and unlabeled data through a self-training mechanism that enhances their ability to generalize and adapt to diverse operational scenarios. This capability is particularly beneficial for real-time applications, where the ability to learn and adapt on the fly is crucial. Simulation results demonstrate the superior performance of the STGNN framework in controlling robot manipulators in operational space, showcasing its potential for real-world applications. The proposed method advances the adaptability, accuracy, and efficiency of robot control systems, offering notable improvements over conventional approaches.
KW - Adaptive control
KW - Deep neural networks
KW - Robot manipulator
KW - Semi-supervised learning
KW - Sliding mode control
KW - Task space control
UR - http://www.scopus.com/inward/record.url?scp=105004755825&partnerID=8YFLogxK
U2 - 10.1007/s00521-025-11204-7
DO - 10.1007/s00521-025-11204-7
M3 - Article
AN - SCOPUS:105004755825
SN - 0941-0643
VL - 37
SP - 14427
EP - 14452
JO - Neural Computing and Applications
JF - Neural Computing and Applications
IS - 19
ER -