Classification Of Rice Plant Diseases Using the Convolutional Neural Network Method
Abstract
Indonesia is a country in which the majority of the population works in agriculture, a sector supported by fertile land and a tropical climate. Rice is one of Indonesia's main agricultural commodities, yet its production has declined year after year, so the factors that affect production are highly significant. Rice disease is one of the factors causing this decline. Technological developments have made it easier to recognize the types of rice plant diseases, and machine learning is one of the technologies used to identify them. This study builds a classification system for rice plant diseases using the Convolutional Neural Network (CNN) method, a machine learning approach widely used for object recognition. The method is applied with the VGG19 architecture, whose features help improve classification results. The dataset consists of 105 images, divided into training and test images. Parameters were tested by varying the number of epochs and applying data augmentation. The experiments achieved a test accuracy of 95.24%. A code sketch illustrating this kind of pipeline follows the abstract.
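The abstract describes transfer learning with the VGG19 architecture, data augmentation, and varying epoch counts. The sketch below is a minimal illustration of such a pipeline, assuming a Keras/TensorFlow implementation; the directory paths, number of classes, augmentation settings, and classifier head are hypothetical and not taken from the paper.

```python
# Minimal sketch: VGG19 transfer learning with data augmentation.
# Assumes images are organized into data/train and data/test folders
# (hypothetical paths) with one subfolder per disease class.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 3          # assumed number of rice disease classes
IMG_SIZE = (224, 224)    # VGG19's default input resolution

# Data augmentation: random rotations, shifts, and flips on the training set.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)
test_gen = ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory(
    "data/train", target_size=IMG_SIZE, batch_size=16, class_mode="categorical"
)
test_data = test_gen.flow_from_directory(
    "data/test", target_size=IMG_SIZE, batch_size=16, class_mode="categorical"
)

# VGG19 convolutional base pre-trained on ImageNet, frozen for feature extraction.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# The number of epochs is one of the parameters varied in the experiments.
model.fit(train_data, validation_data=test_data, epochs=30)
model.evaluate(test_data)
```

Freezing the pre-trained convolutional base and training only a small classifier head is a common choice for small datasets such as the 105-image set described here; the specific head and hyperparameters above are illustrative only.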