Kekarangan Balinese Carving Classification Using Gabor Convolutional Neural Network
Abstract
Traditional Balinese carvings are part of Balinese culture and can easily be found across the island of Bali, from the decoration of Hindu temples to traditional Balinese houses. One type of traditional Balinese carving ornament is the Kekarangan ornament. Despite the abundance of traditional Balinese carvings, many Balinese people recognize only the shape of a carving without knowing its name and characteristics. This lack of understanding stems from the difficulty of finding learning resources on traditional Balinese carving. A classification system for traditional Kekarangan Balinese carvings can therefore help Balinese people identify the classes of traditional Balinese carving. This study uses the Gabor CNN method: a multi-orientation Gabor filter is applied for feature extraction and image augmentation, and a Convolutional Neural Network (CNN) performs the image classification. The Gabor CNN method achieves a highest image classification accuracy of 89%.
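To illustrate the general idea of combining a multi-orientation Gabor filter bank with a CNN classifier, the sketch below builds Gabor kernels at evenly spaced orientations with OpenCV, stacks the filter responses as input channels, and feeds them to a small Keras CNN. This is a minimal sketch under assumed settings, not the authors' exact pipeline: the kernel parameters, the number of orientations, the image size, the network layout, and the five-class output are all illustrative assumptions.

```python
# Illustrative sketch only: multi-orientation Gabor feature extraction
# followed by a small CNN classifier. Parameters and architecture are
# assumptions, not the configuration reported in the paper.
import numpy as np
import cv2
import tensorflow as tf

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=8):
    """Build Gabor kernels at evenly spaced orientations in [0, pi)."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma,
                               psi=0, ktype=cv2.CV_32F)
            for theta in thetas]

def gabor_features(gray_image, kernels):
    """Filter a grayscale image with each kernel; stack responses as channels."""
    responses = [cv2.filter2D(gray_image.astype(np.float32), cv2.CV_32F, k)
                 for k in kernels]
    return np.stack(responses, axis=-1)  # shape: (H, W, n_orientations)

# Small CNN over the Gabor response stack; 5 output classes is an assumed
# number of Kekarangan carving categories.
n_orientations, img_size, n_classes = 8, 128, 5
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(img_size, img_size, n_orientations)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example with a synthetic grayscale image standing in for a carving photo.
kernels = gabor_bank(n_orientations=n_orientations)
dummy = (np.random.rand(img_size, img_size) * 255).astype(np.uint8)
features = gabor_features(dummy, kernels)
prediction = model.predict(features[np.newaxis, ...])
print(prediction.shape)  # (1, n_classes)
```

Stacking the orientation-selective responses as input channels gives the CNN explicit texture cues at multiple angles, which is the general motivation for Gabor-based preprocessing in this kind of carving classification task.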