Abstractive Text Summarization of Indonesian News Articles Using Long Short-Term Memory (LSTM)

  • I Gede Surya Mahardika Udayana University
  • Gusti Made Arya Sasmita Udayana University
  • I Nyoman Piarsa Udayana University

Abstract

The digital era is marked by a rapidly growing number of news articles available online, creating an information-overload problem for readers. To address this, this research develops an abstractive text summarization system for Indonesian news articles using the Long Short-Term Memory (LSTM) method, augmented with FastText word embeddings and an Attention Layer. The dataset comprises 105,588 news articles collected through web scraping of the Detik.com news site. The model was built on a sequence-to-sequence (Seq2Seq) architecture and tested in four variations: basic Seq2Seq LSTM (no additions), LSTM with FastText embeddings, LSTM with an Attention Layer, and LSTM with both FastText and an Attention Layer. The best model was the LSTM with an Attention Layer trained on an 80:20 data split, achieving a ROUGE-1 score of 0.5207, ROUGE-2 of 0.4000, and ROUGE-L of 0.4970. These results show that the LSTM model with an Attention Layer performs better at generating summaries. This research thus contributes to the development of abstractive text summarization systems for the Indonesian language.
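The ROUGE scores reported above measure n-gram overlap between a generated summary and a human reference. As a rough illustration only (the paper's evaluation presumably uses a standard ROUGE implementation; the token pair below is an invented toy example), ROUGE-1 F1 can be sketched in plain Python:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate unigram counts at most as often
    # as it appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Toy Indonesian headline-style pair (hypothetical, not from the dataset)
ref = "pemerintah umumkan kebijakan baru hari ini"
cand = "pemerintah umumkan kebijakan hari ini"
print(round(rouge1_f1(ref, cand), 4))  # → 0.9091
```

ROUGE-2 and ROUGE-L follow the same precision/recall/F1 pattern but count bigram overlap and the longest common subsequence, respectively.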


Keywords: Abstractive Text Summarization, LSTM, FastText, Attention Layer, ROUGE

Published
2025-05-11
How to Cite
MAHARDIKA, I Gede Surya; SASMITA, Gusti Made Arya; PIARSA, I Nyoman. Abstractive Text Summarization of Indonesian News Articles Using Long Short-Term Memory (LSTM). JITTER : Jurnal Ilmiah Teknologi dan Komputer, [S.l.], v. 6, n. 1, p. 2401-2410, May 2025. ISSN 2747-1233. Available at: <https://ojs.unud.ac.id/index.php/jitter/article/view/126431>. Date accessed: 13 May 2025.
