Hebrew Text
Research on Hebrew text focuses on overcoming challenges posed by its rich morphology and relatively limited digital resources compared to languages like English. Current efforts center on developing and adapting large language models (LLMs), such as BERT-based architectures, for various tasks including text summarization, part-of-speech tagging, and machine translation, often incorporating explicit morphological knowledge into model training. These advancements are crucial for expanding natural language processing capabilities to under-resourced languages and enabling new applications in digital humanities, information extraction, and other fields.
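As a concrete illustration of the kind of adaptation this research involves, the sketch below probes a pretrained Hebrew BERT-style model with the Hugging Face transformers library. It is a minimal, illustrative example only: the checkpoint name ("onlplab/alephbert-base") and the example sentence are assumptions, not drawn from any of the papers listed here.

```python
# Minimal sketch: probing a pretrained Hebrew BERT-style model via masked-token
# prediction. The checkpoint name below is an assumed, publicly available Hebrew
# model; any Hebrew BERT-family checkpoint could be substituted.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "onlplab/alephbert-base"  # assumption: illustrative Hebrew checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill-mask serves as a quick check of the model's Hebrew lexical knowledge
# before fine-tuning it for tasks such as POS tagging or summarization.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# "The boy went to [MASK] this morning."
sentence = f"הילד הלך ל{tokenizer.mask_token} הבוקר."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 3))
```

For downstream tasks like part-of-speech tagging, the same checkpoint would typically be loaded with a token-classification head and fine-tuned on annotated Hebrew data, optionally with morphological segmentation applied before tokenization.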