Published online by Cambridge University Press: 21 November 2025
Computer vision-based precision weed control has proven effective in reducing herbicide usage, lowering weed management costs, and enhancing sustainability in modern agriculture. However, developing deep learning models remains challenging because of the effort required to annotate weed datasets and the difficulty of identifying weeds at different growth stages and densities under complex field conditions. To address these challenges, this study introduces an indirect weed detection method that combines deep learning and image processing techniques. The proposed approach first employs an object detection network to identify and label crops within the images. Image processing techniques are then applied to segment the remaining green pixels, thereby enabling indirect detection of weeds. Furthermore, a novel detection network, CD-YOLOv10n (You Only Look Once version 10 nano), was developed on the basis of the YOLOv10 framework to optimize computational efficiency. By redesigning the backbone (C2f-DBB) and integrating an optimized upsampling module (DySample), the network achieved higher detection accuracy while maintaining a lightweight structure. Specifically, the model achieved a mean average precision (mAP50) of 98.1%, a 1.4 percentage-point increase over the YOLOv10n baseline, a relevant improvement given the already strong baseline performance. Relative to YOLOv10n, its GFLOPs were reduced by 22.62% and its parameter count by 15.87%. These innovations make CD-YOLOv10n highly suitable for deployment on resource-constrained platforms.
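The following is a minimal sketch of the indirect weed-detection step described above, under stated assumptions: crop bounding boxes are taken as given (e.g., from a detector such as CD-YOLOv10n), and the excess-green (ExG) index with Otsu thresholding stands in for the paper's unspecified green-pixel segmentation. Function and variable names are illustrative only, not the authors' implementation.

```python
import cv2
import numpy as np

def indirect_weed_mask(image_bgr: np.ndarray,
                       crop_boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Return a binary mask of green pixels that fall outside detected crop boxes.

    crop_boxes: (x1, y1, x2, y2) pixel coordinates of crop detections
    (assumed to come from an upstream object detector).
    """
    img = image_bgr.astype(np.float32)
    b, g, r = cv2.split(img)

    # Excess-green index highlights vegetation against the soil background.
    exg = 2.0 * g - r - b
    exg_norm = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu thresholding separates vegetation pixels from the background.
    _, veg_mask = cv2.threshold(exg_norm, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Zero out regions identified as crops; the remaining green pixels
    # are treated (indirectly) as weeds.
    for x1, y1, x2, y2 in crop_boxes:
        veg_mask[y1:y2, x1:x2] = 0

    # Light morphological opening removes isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(veg_mask, cv2.MORPH_OPEN, kernel)
```

In this reading of the pipeline, detection accuracy is only required for the crop class; anything green that is not inside a crop box is flagged as weed, which is what removes the need for exhaustive weed annotation.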