Transformer-Based LLMs
Transformer-based large language models (LLMs) are a class of deep learning models designed to process and generate human-like text, with applications ranging from cybersecurity to materials science. Current research focuses on improving efficiency through techniques such as low-rank parameterization and binarization, on extending long-context processing, and on mitigating privacy risks through stronger defenses against membership inference attacks. These advances aim to reduce computational cost while maintaining or improving performance across tasks, benefiting fields that rely on natural language understanding and generation.
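As a rough illustration of one efficiency technique mentioned above, the sketch below shows a low-rank parameterization of a single linear layer in PyTorch: the dense weight matrix is replaced by a product of two thin matrices, cutting the parameter count from d_out * d_in to r * (d_out + d_in). The class name, initialization, and hyperparameters are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Linear layer whose weight is factored as W ≈ A @ B with rank r.

    Parameters drop from d_out * d_in to r * (d_out + d_in), which is the
    basic idea behind low-rank parameterization of transformer weights.
    """

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d_out, rank) * 0.02)  # (d_out, r)
        self.B = nn.Parameter(torch.randn(rank, d_in) * 0.02)   # (r, d_in)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (A @ B) has shape (d_out, d_in); apply it as a standard linear map.
        return x @ (self.A @ self.B).t() + self.bias


if __name__ == "__main__":
    layer = LowRankLinear(d_in=1024, d_out=1024, rank=16)
    full_params = 1024 * 1024
    low_rank_params = 16 * (1024 + 1024)
    print(f"dense: {full_params:,} params, low-rank: {low_rank_params:,} params")
    y = layer(torch.randn(2, 1024))  # batch of 2 token embeddings
    print(y.shape)  # torch.Size([2, 1024])
```

With d_in = d_out = 1024 and rank 16, the factored layer uses roughly 32K parameters instead of about 1M, at the cost of restricting the weight to rank 16; choosing the rank trades accuracy against compute and memory.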