Driving Perception in Challenging Road Scenarios: An Empirical Study

Mourad A. Kenk, Mona Elsaidy, M. Hassaballah, Mahmoud B.A. Mansour

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

Vision-based road lane detection is a critical technology for autonomous driving: by accurately identifying lane markings and tracking the vehicle's position, it enables vehicles to navigate safely and efficiently. However, despite exceeding 90% detection recall on large-scale benchmark datasets, existing lane detection methods often fail in real-world scenarios such as adverse weather, heavy shadows, complex road geometries, and challenging lighting conditions. This study highlights these gaps and proposes potential research directions to address them. To this end, a YOLO-based lane detection algorithm is used as a case study to identify perception problems under complex traffic situations.
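The paper's YOLO-based detector is not reproduced here; as a minimal illustration of the lane-marking extraction step such systems perform, the following sketch fits a straight line through bright pixels in a synthetic road image. The function name and threshold are illustrative assumptions, and the classical thresholding approach shown is exactly the kind of pipeline that breaks under the shadows and glare the abstract discusses.

```python
import numpy as np

def detect_lane_line(image, threshold=200):
    """Fit a straight line x = m*y + b through bright lane-marking pixels.

    A minimal classical sketch; learned detectors (e.g. the YOLO-based
    case study in the paper) are needed for robustness to shadows,
    weather, and poor lighting, where fixed thresholds fail.
    """
    ys, xs = np.nonzero(image >= threshold)  # candidate lane pixels
    if len(xs) < 2:
        return None  # markings washed out, e.g. by glare or rain
    m, b = np.polyfit(ys, xs, deg=1)  # x as a function of image row y
    return m, b

# Synthetic 100x100 road image with a diagonal marking near x = 0.5*y + 10
img = np.zeros((100, 100), dtype=np.uint8)
for y in range(100):
    img[y, int(0.5 * y + 10)] = 255

result = detect_lane_line(img)
m, b = result  # recovers roughly m ≈ 0.5, b ≈ 10
```

A real system would run this per frame and track the fitted line over time to estimate the vehicle's lateral position.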

Original language: English
Title of host publication: 2023 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023 - Proceedings
Publisher: IEEE Computer Society
ISBN (Electronic): 9798350319439
DOIs
State: Published - 2023
Externally published: Yes
Event: 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023 - Giza, Egypt
Duration: 4 Dec 2023 – 7 Dec 2023

Publication series

Name: Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA
ISSN (Print): 2161-5322
ISSN (Electronic): 2161-5330

Conference

Conference: 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023
Country/Territory: Egypt
City: Giza
Period: 4/12/23 – 7/12/23

Keywords

  • Adverse Weather
  • Autonomous Driving
  • Lane Detection
  • Lane Markings
  • Safe Navigation
  • Vehicle Detection
