Implementation of VGG-16 Feature Extraction and LSTM Modeling for Automatic Image Caption Generation
Abstract
Image captioning, the task of automatically generating descriptive captions for images, has gained significant attention due to its potential applications in many domains. This paper addresses the challenges of integrating computer vision and natural language processing techniques to build an effective image caption generator. The proposed solution uses the VGG-16 model for image feature extraction and a Long Short-Term Memory (LSTM) model for caption generation. The Flickr8k dataset, containing approximately 8,000 images with five captions per image, is used for training and evaluation. The methodology comprises data preprocessing, feature extraction, model training, and evaluation. Data preprocessing cleans the captions by removing punctuation, single characters, and numeric tokens, and wraps each caption in start and end sequence markers. Image features are extracted with the pre-trained VGG-16 model, and similar images are clustered to verify the accuracy of the extracted features. The captions and corresponding image features are then merged and tokenized for model training. The LSTM model has two input branches, one for image features and one for partial captions, and an output layer that predicts the caption word by word. Hyperparameter tuning over the number of nodes and layers is conducted to optimize the model's performance. Generated captions are evaluated with BLEU scores, where a score closer to 1 indicates higher similarity between predicted and reference captions. The proposed system demonstrates promising results in generating meaningful captions, with potential applications in assisting visually impaired users, medical image analysis, and advertising automation.
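The caption-cleaning step summarized above (lowercasing, removing punctuation, single characters, and numeric tokens, then adding start and end sequence markers) can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function name `clean_caption` and the `startseq`/`endseq` tokens are assumptions based on common image-captioning practice.

```python
import re

def clean_caption(caption):
    """Hypothetical cleaning helper mirroring the preprocessing described:
    lowercase, strip punctuation and digits, drop single-character words,
    and wrap the result in start/end sequence tokens."""
    caption = caption.lower()
    # Replace anything that is not a lowercase letter or space
    # (this removes punctuation and numerical values in one pass).
    caption = re.sub(r"[^a-z\s]", " ", caption)
    # Drop single-character tokens such as stray "a" or "s".
    words = [w for w in caption.split() if len(w) > 1]
    return "startseq " + " ".join(words) + " endseq"

print(clean_caption("A dog, 2 cats & a bird!"))
# -> startseq dog cats bird endseq
```

After cleaning, each caption is tokenized and paired with its image's VGG-16 feature vector for training, as described in the methodology.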