TY - JOUR
AU - Alberola-López, Carlos
AB - Deep learning-based object detection models have become a preferred choice for crop detection tasks in crop monitoring activities due to their high accuracy and generalization capabilities. However, their high computational demand and large memory footprint pose a challenge for use on mobile embedded devices deployed in crop monitoring settings. Various approaches have been taken to minimize the computational cost and reduce the size of object detection models, such as channel and layer pruning, detection head searching, backbone optimization, etc. In this work, we approached computational lightening, model compression, and speed improvement by discarding one or more of the three detection scales of the YOLOv5 object detection model. Thus, we derived up to five separate fast and light models, each with only one or two detection scales. To evaluate the new models for a real crop monitoring use case, the models were deployed on NVIDIA Jetson Nano and NVIDIA Jetson Orin devices. The new models achieved up to a 21.4% reduction in giga floating-point operations per second (GFLOPS), a 31.9% reduction in number of parameters, a 30.8% reduction in model size, and a 28.1% increase in inference speed, with only a small average accuracy drop of 3.6%. These new models are suitable for crop detection tasks since the crops are usually of similar sizes due to the high likelihood of being in the same growth stage, thus making it sufficient to detect the crops with just one or two detection scales.
TI - Simplifying YOLOv5 for deployment in a real crop monitoring setting
JF - Multimedia Tools and Applications
DO - 10.1007/s11042-023-17435-x
DA - 2024-05-01
UR - https://www.deepdyve.com/lp/springer-journals/simplifying-yolov5-for-deployment-in-a-real-crop-monitoring-setting-XgVA3Itk6a
SP - 50197
EP - 50223
VL - 83
IS - 17
DP - DeepDyve
ER -