TY - JOUR
T1 - Deep learning-powered visual place recognition for enhanced mobile multimedia communication in autonomous transport systems
AU - M, Roopa Devi E.
AU - Abirami, T.
AU - Dutta, Ashit Kumar
AU - Alsubai, Shtwai
N1 - Publisher Copyright:
© 2024
PY - 2024/12
Y1 - 2024/12
N2 - The progress of autonomous transport systems (ATS) depends on efficient multimedia communication for real-time data exchange and environmental awareness. Deep learning (DL)-powered visual place recognition (VPR) has emerged as an effective tool to improve mobile multimedia communication in ATS. VPR refers to the capability of a method or device to recognize and identify particular places or locations from a visual scene. This procedure involves inspecting visual data, such as images or video frames, to determine the unique features associated with different locations. By leveraging camera sensors, VPR allows vehicles to perceive their surroundings, enabling context-aware communication and enhancing overall system performance. DL-empowered VPR offers a transformative means of improving mobile multimedia communication in ATS. By identifying and understanding their surroundings, autonomous vehicles can communicate more effectively and operate reliably and safely, paving the way for a future characterized by seamless and intelligent transportation. This article develops a novel Deep Learning-Powered Visual Place Recognition for Enhanced Multimedia Communication in Autonomous Transport Systems (DLVPR-MCATS) methodology. The main aim of the DLVPR-MCATS methodology is to recognize visual places using optimal DL approaches. For this purpose, the DLVPR-MCATS approach utilizes a bilateral filtering (BF)-based preprocessing model. For feature fusion, the DLVPR-MCATS approach combines three models: residual network (ResNet), EfficientNet, and MobileNetV2. Moreover, hyperparameter tuning is performed using the Harris Hawks Optimization (HHO) algorithm. Finally, the bidirectional long short-term memory (BiLSTM) technique is applied to recognize visual places. A wide range of simulations is executed to validate the performance of the DLVPR-MCATS method. The experimental validation of the DLVPR-MCATS method demonstrated superior performance over other models across various measures.
AB - The progress of autonomous transport systems (ATS) depends on efficient multimedia communication for real-time data exchange and environmental awareness. Deep learning (DL)-powered visual place recognition (VPR) has emerged as an effective tool to improve mobile multimedia communication in ATS. VPR refers to the capability of a method or device to recognize and identify particular places or locations from a visual scene. This procedure involves inspecting visual data, such as images or video frames, to determine the unique features associated with different locations. By leveraging camera sensors, VPR allows vehicles to perceive their surroundings, enabling context-aware communication and enhancing overall system performance. DL-empowered VPR offers a transformative means of improving mobile multimedia communication in ATS. By identifying and understanding their surroundings, autonomous vehicles can communicate more effectively and operate reliably and safely, paving the way for a future characterized by seamless and intelligent transportation. This article develops a novel Deep Learning-Powered Visual Place Recognition for Enhanced Multimedia Communication in Autonomous Transport Systems (DLVPR-MCATS) methodology. The main aim of the DLVPR-MCATS methodology is to recognize visual places using optimal DL approaches. For this purpose, the DLVPR-MCATS approach utilizes a bilateral filtering (BF)-based preprocessing model. For feature fusion, the DLVPR-MCATS approach combines three models: residual network (ResNet), EfficientNet, and MobileNetV2. Moreover, hyperparameter tuning is performed using the Harris Hawks Optimization (HHO) algorithm. Finally, the bidirectional long short-term memory (BiLSTM) technique is applied to recognize visual places. A wide range of simulations is executed to validate the performance of the DLVPR-MCATS method. The experimental validation of the DLVPR-MCATS method demonstrated superior performance over other models across various measures.
KW - Autonomous transport systems
KW - Bilateral filtering
KW - Deep learning
KW - Hyperparameter tuning
KW - Visual place recognition
UR - https://www.scopus.com/pages/publications/85205442296
U2 - 10.1016/j.aej.2024.09.060
DO - 10.1016/j.aej.2024.09.060
M3 - Article
AN - SCOPUS:85205442296
SN - 1110-0168
VL - 109
SP - 950
EP - 962
JO - Alexandria Engineering Journal
JF - Alexandria Engineering Journal
ER -