Latin Text
Research on Latin text develops computational methods for analyzing and processing this historical language, primarily to improve access to and understanding of large historical corpora. Current efforts apply deep learning models, chiefly Transformer architectures such as BERT and RoBERTa variants alongside other neural networks, to tasks such as handwriting recognition, morphological analysis, and cross-lingual semantic comparison with Greek and with modern languages. These advances support historical linguistic research, improve the accessibility of historical documents, and provide new tools for digital humanities scholarship.
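To make the contextual-embedding approach concrete, below is a minimal sketch of comparing a Latin word's embedding across two contexts with a multilingual Transformer via Hugging Face transformers. The model name, the example sentences, and the mean-pooling choice are illustrative assumptions, not details taken from the papers listed here.

```python
# Sketch: contextual embeddings for a Latin word in two contexts,
# compared with cosine similarity. Model and sentences are placeholders.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # assumed stand-in; a Latin-specific model could be substituted
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the hidden states of the subword tokens that make up `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"][0].tolist()
    # Locate the word's contiguous subword span inside the tokenized sentence.
    for start in range(len(ids) - len(word_ids) + 1):
        if ids[start:start + len(word_ids)] == word_ids:
            return hidden[start:start + len(word_ids)].mean(dim=0)
    raise ValueError(f"'{word}' not found as a contiguous subword span")

# Same surface form in two invented, charter-style sentences: a high cosine
# similarity suggests stable usage, a low one hints at a shift in meaning.
emb_a = word_embedding("Rex terram ecclesiae concessit.", "terram")
emb_b = word_embedding("Vendidit terram illam pro decem solidis.", "terram")
similarity = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```

The same pooling trick extends to static-versus-contextual comparisons: a static embedding assigns one vector per word type, while the function above yields one vector per occurrence, which is what makes occurrence-level semantic-change analysis possible.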
Papers
Comparative Analysis of Static and Contextual Embeddings for Analyzing Semantic Changes in Medieval Latin Charters
Yifan Liu, Gelila Tilahun, Xinxiang Gao, Qianfeng Wen, Michael Gervers
Sui Generis: Large Language Models for Authorship Attribution and Verification in Latin
Gleb Schmidt, Svetlana Gorovaia, Ivan P. Yamshchikov