Accurate weed identification in farmland is crucial for improving the precision of intelligent weeding. This study focuses on weeds in maize seedling fields and builds an identification model based on the Faster R-CNN deep-learning framework. An image database is created and labeled, and the VGG-16 network is used to extract weed features from the labeled dataset. By computing scores for candidate regions, a neural network model is trained to determine weed positions and types. The model achieves an average accuracy of 81.25% and an identification rate of 94.3%. To assess field performance, the model is evaluated under different conditions of lighting, field of view, and occlusion; occlusion has the greatest impact on the identification rate. Without occlusion, the precision is 94.4%, dropping to 79.2% when the occlusion rate exceeds 50%; adjusting the shooting angle, however, raises the precision to 97.1%. Under real field conditions, taking all factors into account, the weed identification precision is 94.3%. The results show that the method adapts well to field conditions and detects images quickly: with GPU acceleration, the average detection time is 50 milliseconds per image, and video streams can be processed at 20 frames per second. The method copes with complex environments, detects accurately, and requires little computation time, providing key support for automated mechanical weeding and showing strong potential for practical application in agricultural production.
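To make the detection pipeline concrete, the sketch below shows one way to assemble a Faster R-CNN detector with a VGG-16 feature extractor. The paper does not state the software framework; this example assumes PyTorch/torchvision, and the anchor sizes, class count (background, maize seedling, weed), and image size are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch (assumed PyTorch/torchvision implementation; not the authors' code).
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# VGG-16 convolutional layers serve as the feature extractor.
backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
backbone.out_channels = 512  # channels of the final VGG-16 conv feature map

# Region proposal network: scores candidate regions over a grid of anchors.
# Anchor sizes/aspect ratios here are illustrative, not the paper's values.
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# ROI pooling over the single VGG feature map.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# Hypothetical class set: background + maize seedling + weed.
model = FasterRCNN(
    backbone,
    num_classes=3,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

model.eval()
with torch.no_grad():
    # Dummy RGB image; real use would train on labeled field images first.
    predictions = model([torch.rand(3, 600, 800)])
print(predictions[0]["boxes"].shape, predictions[0]["labels"], predictions[0]["scores"])
```

In this setup, the backbone produces the feature map, the region proposal network scores candidate boxes on it, and the detection head classifies each pooled region and refines its box, which corresponds to the "candidate region scoring" step described above; GPU acceleration of the same forward pass is what yields per-image inference times on the order of tens of milliseconds.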