Multi-modal remote perception learning for object sensory data

Nouf Abdullah Almujally, Adnan Ahmed Rafique, Naif Al Mudawi, Abdulwahab Alazeb, Mohammed Alonazi, Asaad Algarni, Ahmad Jalal, Hui Liu

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

Introduction: Intelligent systems interpret visual input through contextual scene learning, which significantly improves both resilience and context awareness. The need to manage enormous amounts of data drives growing interest in computational frameworks, particularly for autonomous vehicles. Method: This study introduces Deep Fused Networks (DFN), a novel approach that improves contextual scene comprehension by merging multi-object detection with semantic analysis. Results: DFN combines deep learning and fusion techniques to enhance accuracy and comprehension in complex scenes, yielding a minimum accuracy gain of 6.4% on the SUN-RGB-D dataset and 3.6% on the NYU-Dv2 dataset. Discussion: The findings demonstrate considerable improvements in object detection and semantic analysis compared with existing methods.
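The abstract describes DFN as fusing deep-learning branches for multi-object detection and semantic analysis over RGB-D sensory input. A minimal PyTorch sketch of that general idea follows; it assumes a simple two-branch encoder with late concatenation, and every class name, layer size, and dimension is an illustrative placeholder rather than the authors' actual architecture.

```python
import torch
import torch.nn as nn

class FusedSceneNet(nn.Module):
    """Illustrative two-branch network: one branch encodes RGB appearance,
    one encodes depth, and their features are fused by concatenation before
    a shared classification head (all dimensions are placeholders)."""

    def __init__(self, num_classes: int = 37, feat_dim: int = 256):
        super().__init__()
        # RGB branch: small convolutional encoder producing a feature vector
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Depth branch: single-channel encoder of the same shape
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Fusion + classifier: concatenate branch features, then predict labels
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return self.classifier(fused)

# Example: a batch of 4 RGB-D frames (SUN-RGB-D-style input sizes assumed)
model = FusedSceneNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 37])
```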

Original language: English
Article number: 1427786
Journal: Frontiers in Neurorobotics
Volume: 18
DOIs
State: Published - 2024

Keywords

  • multi-modal
  • object recognition
  • sensory data
  • simulation environment
  • visionary sensor
