Browsing by Author "Kezzal, Chahira"
Now showing 1 - 2 of 2
Item: Deformable Transformer-Based Object Detection for Robust Perception in Autonomous Driving (IEEE, 2025)
Kezzal, Chahira; Benderradji, Selsabil; Benlamoudi, Azeddine; Bekhouche, Salah Eddine; Taleb, Abdel; Hadid, Abdenour

Autonomous driving demands robust and real-time object detection to navigate safely in complex environments. While convolutional neural network (CNN)-based detectors have been widely adopted, they face challenges such as limited receptive fields and inefficiency in handling small or occluded objects. This paper presents a deformable Transformer-based object detection framework designed to address these limitations. By leveraging deformable attention mechanisms, the model dynamically focuses on relevant spatial regions, significantly enhancing detection accuracy. Evaluated on the benchmark KITTI dataset, our proposed approach achieves a mAP@50 of 96.6%, surpassing many state-of-the-art methods, at the cost of slower inference speed (7.0 FPS). The experimental results also demonstrate the framework's superior precision and adaptability in autonomous driving scenarios. This work underscores the potential of deformable Transformers to advance perception systems, balancing high accuracy with the demands of real-world applications.

Item: Efficient Real-Time Multi-Class Object Tracking with YOLO11 and ByteTrack in Real-World Driving Scenes (IEEE, 2025)
Benderradji, Selsabil; Kezzal, Chahira; Benlamoudi, Azeddine; Bekhouche, Salah Eddine; Taleb, Abdel

Accurate and real-time multi-object tracking (MOT) is essential for autonomous driving systems to ensure safe navigation and decision-making in dynamic environments. This paper introduces a tracking-by-detection pipeline that integrates YOLOv11, a high-speed, high-accuracy object detector, with ByteTrack, a robust data association algorithm capable of leveraging both high- and low-confidence detections.
The proposed framework addresses key challenges in MOT, such as frequent occlusions, fluctuating lighting, and dense traffic, by combining efficient detection with motion-consistent identity tracking. Evaluated on the KITTI benchmark, our method demonstrates superior performance across multiple metrics, including HOTA, AssA, and MOTA, for both cars and pedestrians. Additionally, the system achieves an average runtime of 60.4 FPS, supporting its real-time applicability. The results confirm that the proposed YOLOv11 + ByteTrack integration provides a scalable, accurate, and deployment-ready solution for complex urban driving scenarios.
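The deformable attention mechanism named in the first item can be illustrated with a minimal sketch: instead of attending to every location in the feature map, each query samples a small set of points (a reference point plus learned offsets) and aggregates them with attention weights. This is an illustrative toy in NumPy, not the paper's implementation; all names are hypothetical, and real models use bilinear interpolation and learned projections rather than nearest-neighbour lookup.

```python
import numpy as np

def deformable_attention(feature_map, ref_points, offsets, weights):
    """Toy sketch of deformable attention (not the paper's code).

    Each query attends only to K sampled locations: its reference
    point plus K learned offsets, rather than the full H*W grid.
    feature_map: (H, W, C) feature map
    ref_points:  (Q, 2) per-query reference point (y, x)
    offsets:     (Q, K, 2) learned sampling offsets
    weights:     (Q, K) attention weights (assumed normalised)
    returns:     (Q, C) aggregated features per query
    """
    H, W, C = feature_map.shape
    Q, K, _ = offsets.shape
    out = np.zeros((Q, C))
    for q in range(Q):
        for k in range(K):
            y, x = ref_points[q] + offsets[q, k]
            # Nearest-neighbour sampling for simplicity; deformable
            # attention in practice uses bilinear interpolation.
            yi = int(np.clip(np.rint(y), 0, H - 1))
            xi = int(np.clip(np.rint(x), 0, W - 1))
            out[q] += weights[q, k] * feature_map[yi, xi]
    return out
```

The sparsity is the point: cost scales with K sampled locations per query instead of the full spatial resolution, which is what lets the detector focus dynamically on relevant regions such as small or occluded objects.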
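The second item's key idea, ByteTrack's use of both high- and low-confidence detections, can be sketched as a two-stage association: high-confidence detections are matched to existing tracks first, and low-confidence ones are then matched to whatever tracks remain, which helps keep identities through occlusions. The sketch below uses greedy IoU matching for brevity; the actual algorithm pairs Hungarian assignment with a Kalman motion model, and the function names here are hypothetical.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, scores, high_thr=0.6, iou_thr=0.3):
    """Two-stage association in the spirit of ByteTrack.

    Stage 1 matches high-confidence detections to tracks; stage 2
    offers the leftover tracks to low-confidence detections, so a
    partially occluded object (low detector score) can still keep
    its identity. Greedy matching stands in for Hungarian assignment.
    """
    high = [i for i, s in enumerate(scores) if s >= high_thr]
    low = [i for i, s in enumerate(scores) if s < high_thr]
    matches, unmatched = [], set(range(len(tracks)))
    for group in (high, low):          # stage 1 then stage 2
        for d in group:
            best_t, best_iou = None, iou_thr
            for t in unmatched:
                ov = iou(tracks[t], detections[d])
                if ov > best_iou:
                    best_t, best_iou = t, ov
            if best_t is not None:
                matches.append((best_t, d))
                unmatched.discard(best_t)
    return matches, unmatched
```

For example, a detection scored 0.4 would be discarded by a detector-only threshold, but here it can still rescue an existing track in stage 2 if the boxes overlap sufficiently.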
