Fine-Tuned RetinaNet for Real-Time Lettuce Detection

  • Eko Wahyu Prasetyo, Universitas Merdeka Malang
  • Hidetaka Nambo, Kakumamachi, Kanazawa, Ishikawa, Japan

Abstract

The agricultural industry plays a vital role in meeting the global demand for food. As the population grows, there is an increasing need for efficient farming practices that maximize crop yields. Conventional lettuce harvesting often relies on manual labor, which is time-consuming, labor-intensive, and prone to human error. These challenges have motivated research into automation technologies, such as robotics, to improve harvest efficiency and reduce reliance on human intervention. Deep learning-based object detection models have shown impressive success in computer vision tasks such as object recognition, and a RetinaNet model can be trained to identify and localize lettuce accurately. However, to deploy such models in real-world agricultural scenarios, the pre-trained weights must be fine-tuned to the specific characteristics of lettuce, such as shape, size, and occlusion. Fine-tuning on lettuce-specific datasets improves the model's accuracy and robustness for detecting and localizing lettuce. The fine-tuned RetinaNet achieved its highest scores of 0.782 accuracy, 0.844 recall, 0.875 F1-score, and 0.962 mAP; for each of these metrics, a higher score indicates better performance.
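
This page gives no implementation, so the following is a minimal sketch of the fine-tuning recipe the abstract describes, using torchvision's off-the-shelf RetinaNet. The dataset class (LettuceDataset), data path, and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

    # Sketch: fine-tune a COCO-pretrained RetinaNet for single-class
    # lettuce detection. LettuceDataset and "data/lettuce" are
    # hypothetical placeholders for a lettuce-specific dataset.
    import torch
    from torch.utils.data import DataLoader
    from torchvision.models.detection import (
        retinanet_resnet50_fpn, RetinaNet_ResNet50_FPN_Weights)
    from torchvision.models.detection.retinanet import RetinaNetClassificationHead

    NUM_CLASSES = 2  # lettuce + background (torchvision counts background)

    # Load COCO-pretrained weights, then replace the classification head
    # so it predicts our classes; only the new head starts from scratch.
    model = retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)
    model.head.classification_head = RetinaNetClassificationHead(
        in_channels=model.backbone.out_channels,
        num_anchors=model.head.classification_head.num_anchors,
        num_classes=NUM_CLASSES,
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                                momentum=0.9, weight_decay=5e-4)

    # Each target is a dict with "boxes" (N x 4, xyxy) and "labels" (N,).
    loader = DataLoader(LettuceDataset("data/lettuce"), batch_size=4,
                        shuffle=True, collate_fn=lambda b: tuple(zip(*b)))

    for epoch in range(20):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            # In train mode torchvision's RetinaNet returns its focal
            # classification loss and box regression loss as a dict.
            losses = model(images, targets)
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Replacing only the classification head keeps the pre-trained backbone and feature pyramid intact, which is what lets a comparatively small lettuce-specific dataset adapt the model.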
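
For reference, the reported precision-, recall-, and F1-style scores are standard detection metrics computed from IoU-matched detections. The sketch below assumes a 0.5 IoU match threshold and confidence-sorted predictions, neither of which is stated on this page; mAP additionally averages precision over recall levels.

    # Sketch: precision / recall / F1 for one image via IoU matching.
    # The 0.5 IoU threshold is an assumption, not the paper's setting.
    def iou(a, b):
        # a, b: boxes as (x1, y1, x2, y2)
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def detection_metrics(pred_boxes, gt_boxes, iou_thr=0.5):
        # pred_boxes must be sorted by descending confidence; each
        # ground-truth box may be matched by at most one prediction.
        matched, tp = set(), 0
        for p in pred_boxes:
            best, best_iou = None, iou_thr
            for i, g in enumerate(gt_boxes):
                v = iou(p, g)
                if i not in matched and v >= best_iou:
                    best, best_iou = i, v
            if best is not None:
                matched.add(best)
                tp += 1
        fp, fn = len(pred_boxes) - tp, len(gt_boxes) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1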

Published

2024-03-25

How to Cite

PRASETYO, Eko Wahyu; NAMBO, Hidetaka. Fine-Tuned RetinaNet for Real-Time Lettuce Detection. Lontar Komputer : Jurnal Ilmiah Teknologi Informasi, [S.l.], v. 15, n. 1, p. 13-25, Mar. 2024. ISSN 2541-5832. Available at: <https://ojs.unud.ac.id/index.php/lontar/article/view/109624>. Date accessed: 21 Nov. 2024. doi: https://doi.org/10.24843/LKJITI.2024.v15.i01.p02.
