LGO-YOLO: A Lightweight Generalized Optimization of YOLOv8 for On-device Object Detection
- The Institute of Internet, Broadcasting and Communication (한국인터넷방송통신학회)
- International Journal of Advanced Smart Convergence
- Vol. 14, No. 2, 2025, pp. 60-68 (9 pages)
On-device AI environments require real-time processing but are constrained by limited computational resources. Previous studies have shown that simply replacing high-cost computational modules with low-cost alternatives does not always yield actual speed improvements on embedded hardware. This study therefore designs a YOLOv8-n-based lightweight network that achieves real-time inference and high accuracy under stringent resource constraints. The proposed model, LGO-YOLO, applies module structures optimized for embedded computation to both the Backbone and the Neck, reducing the model's computational load and parameter count by approximately 42% and 40%, respectively. Despite these reductions, the model matches or exceeds YOLOv8-n on several performance metrics, most notably an mAP@0.5 of 99.3%. Furthermore, in an NPU environment it records the fastest inference time (25.4 ms) among all compared models. This work demonstrates how careful structural design can balance the limits of model lightweighting against performance requirements, indicating that the proposed network can be deployed effectively in real embedded systems and other low-power application scenarios.
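The abstract does not disclose the internal design of LGO-YOLO's replacement modules. Purely as an illustration of the kind of swap it describes, the PyTorch sketch below compares a standard YOLOv8-style Conv block with a cheaper depthwise-separable variant and counts their parameters. All class and function names (StandardConv, LightweightConv, count_params) are illustrative assumptions, not the paper's actual code, and, as the abstract itself notes, a lower parameter or FLOP count does not guarantee faster inference on embedded hardware.

```python
# Minimal sketch, assuming a PyTorch implementation; not the paper's LGO-YOLO module.
# It only illustrates the general idea of replacing a high-cost block with a
# computationally cheaper one and measuring the parameter reduction.
import torch
import torch.nn as nn


class StandardConv(nn.Module):
    """Standard 3x3 Conv -> BatchNorm -> SiLU block, as used throughout YOLOv8."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class LightweightConv(nn.Module):
    """Depthwise-separable replacement: depthwise 3x3 followed by pointwise 1x1."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, 1, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))


def count_params(m: nn.Module) -> int:
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())


if __name__ == "__main__":
    x = torch.randn(1, 128, 40, 40)  # a typical mid-level feature map size
    std, lite = StandardConv(128, 128), LightweightConv(128, 128)
    print("standard params:   ", count_params(std))   # ~147.7k
    print("lightweight params:", count_params(lite))  # ~17.8k
    print("output shapes:", std(x).shape, lite(x).shape)  # identical shapes
```

Running the script shows roughly an 8x parameter reduction for this single block while preserving the output shape; whether such a swap also reduces latency depends on how well the target NPU or embedded accelerator handles the resulting operations, which is the deployment concern the paper addresses.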