Abstract
The paper introduces a lane detection model for autonomous vehicles and advanced driver assistance systems (ADAS) that improves on traditional methods, which struggle under challenging conditions such as shadows, worn lane markings, and occlusions. Unlike single-frame approaches, the model combines classical vision techniques with deep learning for better accuracy and real-time performance. It uses a Deep Convolutional Neural Network (DCNN) with an encoder-decoder architecture to extract spatial features from each frame, and a Deep Recurrent Neural Network (DRNN) built on Convolutional Long Short-Term Memory (ConvLSTM) units to exploit temporal dependencies across consecutive frames. A hybrid attention module focuses the network on lane-specific features, and a fusion reasoning system combines rule-based and deep learning outputs for robust results. The model was evaluated on the TuSimple dataset and two custom datasets (urban and rural roads), targeting over 60 FPS, an IoU above 0.3, and over 95% accuracy on edge devices. By moving beyond single-frame limitations and optimizing computation, the approach improves scalability, accuracy, and speed for safer autonomous driving.
Keywords: Convolutional neural network, LSTM, lane detection, semantic segmentation, autonomous driving.
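The temporal component described above, ConvLSTM units consuming per-frame encoder feature maps, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: real ConvLSTMs use k x k spatial convolutions for the gates, while this sketch simplifies them to 1x1 convolutions (a per-pixel channel mixing) so it stays short and dependency-free; all shapes and names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ConvLSTMCell:
    """Minimal ConvLSTM cell sketch (1x1-conv gates, pure NumPy).
    Real implementations use kxk spatial kernels over [x, h]."""
    def __init__(self, in_ch, hid_ch, seed=0):
        rng = np.random.default_rng(seed)
        # one weight matrix producing all 4 gates (i, f, o, g) at once
        self.W = rng.standard_normal((4 * hid_ch, in_ch + hid_ch)) * 0.1
        self.b = np.zeros(4 * hid_ch)

    def step(self, x, h, c):
        # x: (C_in, H, W); h, c: (C_hid, H, W)
        z = np.concatenate([x, h], axis=0)            # stack input and state
        g = np.einsum('oc,chw->ohw', self.W, z) + self.b[:, None, None]
        i, f, o, u = np.split(g, 4, axis=0)           # gate pre-activations
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(u)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # new hidden state
        return h, c

# Feed a short sequence of per-frame encoder feature maps through the cell,
# as a stand-in for the DCNN encoder's output over a sliding frame window.
cell = ConvLSTMCell(in_ch=8, hid_ch=16)
h = c = np.zeros((16, 12, 20))
for t in range(5):  # hypothetical 5-frame window
    frame_feat = np.random.default_rng(t).standard_normal((8, 12, 20))
    h, c = cell.step(frame_feat, h, c)
print(h.shape)  # (16, 12, 20)
```

The final hidden state `h` carries information accumulated across frames, which is what lets the temporal branch stay robust when a single frame has occluded or faded markings; a decoder would then upsample `h` to a lane segmentation mask.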
Copyright
Copyright © 2025 JYOTI. This is an open access article distributed under the Creative Commons Attribution License.