Text Classification
Text classification aims to automatically categorize text into predefined categories, driven by the need for efficient and accurate information processing across diverse domains. Current research focuses on leveraging pre-trained language models like BERT and large language models (LLMs) like Llama 2, often enhanced with techniques such as fine-tuning, data augmentation, and active learning, alongside traditional machine learning methods like SVMs and XGBoost. These advancements are improving the accuracy and efficiency of text classification, with significant implications for applications ranging from medical diagnosis and financial analysis to social media monitoring and legal research. A key challenge remains ensuring model robustness, interpretability, and fairness, particularly when dealing with imbalanced datasets or noisy labels.
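To make the traditional baseline mentioned above concrete, the following is a minimal sketch of text classification with TF-IDF features and a linear SVM using scikit-learn. The toy texts and category labels are invented for illustration and are not drawn from any of the papers listed below; this is one simple setup, not a prescribed method.

```python
# Minimal sketch: TF-IDF features + linear SVM for text classification (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; real applications would use a much larger corpus.
train_texts = [
    "Quarterly revenue beat analyst expectations",
    "The central bank raised interest rates again",
    "Patient presented with acute chest pain",
    "New trial shows the drug reduces blood pressure",
]
train_labels = ["finance", "finance", "medical", "medical"]

# TF-IDF turns raw text into weighted term-count vectors;
# the linear SVM then learns a decision boundary over those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["The hospital admitted the patient for observation"]))
# Expected output for this toy setup: ['medical']
```

LLM-based approaches replace the TF-IDF vectorizer with learned contextual representations (e.g., fine-tuning BERT or prompting Llama 2), which generally improves accuracy at a higher computational cost.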
Papers
Ahead of the Text: Leveraging Entity Preposition for Financial Relation Extraction
Stefan Pasch, Dimitrios Petridis
Large Language Model Prompt Chaining for Long Legal Document Classification
Dietrich Trautmann
A Comparative Study on TF-IDF feature Weighting Method and its Analysis using Unstructured Dataset
Mamata Das, Selvakumar K., P. J. A. Alphonse