Title: Deformable Transformer-Based Object Detection for Robust Perception in Autonomous Driving
Authors: Kezzal, Chahira; Benderradji, Selsabil; Benlamoudi, Azeddine; Bekhouche, Salah Eddine; Taleb, Abdel; Hadid, Abdenour
Dates: 2026-03-04; 2025
DOI: 10.1109/ICSPIS67605.2025.11318379
URI: https://dspace.univ-boumerdes.dz/handle/123456789/16180
Language: en
Keywords: Object Detection; Convolutional Neural Network; Image edge detection
Type: Article

Abstract: Autonomous driving demands robust, real-time object detection to navigate complex environments safely. While convolutional neural network (CNN)-based detectors have been widely adopted, they suffer from limited receptive fields and handle small or occluded objects inefficiently. This paper presents a deformable Transformer-based object detection framework designed to address these limitations. By leveraging deformable attention mechanisms, the model dynamically focuses on relevant spatial regions, significantly improving detection accuracy. Evaluated on the benchmark KITTI dataset, the proposed approach achieves an mAP@50 of 96.6%, surpassing many state-of-the-art methods, at the cost of slower inference speed (7.0 FPS). The experimental results also demonstrate the framework's superior precision and adaptability in autonomous driving scenarios. This work underscores the potential of deformable Transformers to advance perception systems, balancing high accuracy with the demands of real-world applications.
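The deformable attention mechanism the abstract refers to can be illustrated with a minimal sketch: for each query, features are bilinearly sampled at a reference point plus a small set of learned offsets, then combined with attention weights. The NumPy code below is an illustrative single-query, single-head sketch under assumed shapes; the function names, arguments, and layout are this sketch's own conventions, not the paper's implementation.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a (H, W, C) feature map at fractional location (y, x)."""
    H, W, _ = feat.shape
    y = np.clip(y, 0, H - 1)
    x = np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def deformable_attention(feat, ref_point, offsets, attn_weights):
    """Weighted sum of features sampled at ref_point + learned offsets.

    feat:         (H, W, C) feature map
    ref_point:    (2,) reference location (y, x) for one query
    offsets:      (K, 2) sampling offsets relative to the reference point
    attn_weights: (K,) attention weights, assumed to sum to 1
    """
    samples = np.stack([bilinear_sample(feat, ref_point[0] + dy, ref_point[1] + dx)
                        for dy, dx in offsets])  # (K, C)
    return attn_weights @ samples  # (C,)
```

Unlike dense self-attention, which attends over every spatial location, this attends to only K sampled points per query, which is what lets the detector focus computation on the regions most relevant to each object.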