Abstract
The rapid growth of autonomous vehicle technology demands
accurate and efficient object detection systems. Traditional deep
learning models deliver strong detection accuracy; however,
they often require heavy computational resources, which makes real-
time deployment difficult on embedded automotive platforms. This
trade-off between speed and accuracy poses a challenge, especially in
dynamic driving environments where detection delay may compromise
safety. This study investigated a lightweight real-time detection
framework based on improved YOLO variants optimized for low-power
environments. The method used knowledge distillation, structured
pruning, and feature compression to remove redundant layers that
do not contribute to final prediction accuracy. A quantization-
aware training approach was integrated to enhance efficiency on
embedded hardware. Transfer learning was adopted using a pre-
trained YOLOv5s backbone, followed by fine-tuning on an
annotated autonomous driving dataset. The experimental results
indicate that the proposed lightweight model achieves faster inference
with higher accuracy than the baselines. The optimized network
processes live video frames at 47 FPS and maintains a mean average
precision of 95 percent, with a precision of 96 percent and a recall of
94 percent, surpassing the Faster R-CNN, SSD, and Tiny-YOLO
baselines. Inference time is reduced to 19 ms on embedded hardware,
which confirms suitability for real-time autonomous driving perception.
Authors
Belwin J. Brearley1, K. Regin Bose2, N. Kanagavalli3
B.S. Abdur Rahman Crescent Institute of Science and Technology, India1, Rajalakshmi Institute of Technology, India2,3
Keywords
YOLO, Autonomous Vehicles, Object Detection, Lightweight Model, Real-Time Detection