Leveraging Generative AI: Improving Software Metadata Classification with Generated Code-Comment Pairs. (arXiv:2311.03365v1 [cs.SE])
In software development, code comments play a crucial role in enhancing code
comprehension and collaboration. This research paper addresses the challenge of
objectively classifying code comments as “Useful” or “Not Useful.” We propose a
novel solution that harnesses contextualized embeddings, particularly BERT, to
automate this classification process. We address this task by incorporating
generated code-comment pairs. The initial dataset comprised 9048 pairs of C
code and comments, each labeled as either Useful or Not Useful. To augment this
dataset, we sourced an additional 739 code-comment pairs and generated their
labels using a large language model architecture, specifically BERT. The
primary objective was to build classification models that can
effectively differentiate between useful and not useful code comments. Various
machine learning algorithms were employed, including Logistic Regression,
Decision Tree, K-Nearest Neighbors (KNN), Support Vector Machine (SVM),
Gradient Boosting, Random Forest, and a Neural Network. Each algorithm was
evaluated using precision, recall, and F1-score, on both the original seed
dataset and the augmented dataset. This study showcases the potential of
generative AI for enhancing binary code comment quality classification models,
providing valuable insights for software developers and researchers in the
field of natural language processing and software engineering.
Source: https://arxiv.org/abs/2311.03365
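As a rough illustration of the embedding step the abstract describes, the sketch below extracts BERT [CLS] vectors for code-comment pairs. This is a minimal sketch under stated assumptions: the checkpoint name, CSV file, and column names are illustrative, not details taken from the paper.

# Sketch of the embedding step: BERT [CLS] vectors for code-comment pairs.
# The checkpoint, CSV file, and column names are assumptions for illustration.
import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_pair(comment: str, code: str) -> torch.Tensor:
    # Encode the comment and its code as a sentence pair and keep the
    # [CLS] token's final hidden state as a fixed-size feature vector.
    inputs = tokenizer(comment, code, truncation=True, max_length=256,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

df = pd.read_csv("code_comment_pairs.csv")            # hypothetical file
X = torch.stack([embed_pair(m, c)
                 for m, c in zip(df["comment"], df["code"])]).numpy()
y = (df["label"] == "Useful").astype(int).to_numpy()  # 1 = Useful, 0 = Not Useful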
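Building on those features, a comparison of the classifiers named in the abstract might look like the following sketch, scored with precision, recall, and F1. It assumes the X and y arrays from the previous snippet; the train/test split and hyperparameters are illustrative choices, not the paper's reported setup.

# Sketch of the model comparison over the classifiers listed in the abstract.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Neural Network": MLPClassifier(max_iter=500),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, clf.predict(X_test), average="binary")
    print(f"{name:20s} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")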