Domain-Specific
Domain-specific adaptation of large language models (LLMs) aims to improve their performance and reliability in specialized fields by overcoming limitations that stem from data scarcity and specialized terminology. Current research emphasizes effective data curation, including synthetic data generation and knowledge distillation for transferring knowledge between domain-specific and general-purpose models, as well as supporting infrastructure such as graph-oriented databases that improve performance and ease maintenance. This work is crucial for broadening the applicability of LLMs to diverse sectors, improving efficiency in areas such as finance, healthcare, and scientific research, and addressing concerns about bias and hallucination in sensitive domains.
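To make the distillation idea concrete, below is a minimal PyTorch sketch of Hinton-style knowledge distillation: a frozen teacher supervises a smaller student by matching temperature-softened output distributions. The model shapes, toy vocabulary, and random batch are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal knowledge-distillation sketch (hypothetical models and data).
# A larger, frozen "teacher" supervises a smaller "student" by matching
# temperature-softened output distributions over a toy vocabulary.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 100          # toy vocabulary size (assumption for illustration)
DIM_T, DIM_S = 256, 64

teacher = nn.Sequential(nn.Embedding(VOCAB, DIM_T), nn.Linear(DIM_T, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM_S), nn.Linear(DIM_S, VOCAB))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(tokens: torch.Tensor, temperature: float = 2.0) -> float:
    """One distillation step: KL divergence on temperature-softened logits."""
    with torch.no_grad():                       # teacher stays frozen
        t_logits = teacher(tokens) / temperature
    s_logits = student(tokens) / temperature
    # KL between softened distributions, scaled by T^2 as in Hinton et al.
    loss = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a random token batch stands in for curated domain data.
batch = torch.randint(0, VOCAB, (8, 16))
print(distill_step(batch))
```

In practice the teacher and student would be full LLMs and the batch would come from a curated or synthetic domain corpus; the loop above only shows the core objective.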
Papers
Virchow2: Scaling Self-Supervised Mixed Magnification Models in Pathology
Eric Zimmermann, Eugene Vorontsov, Julian Viret, Adam Casson, Michal Zelechowski, George Shaikovski, Neil Tenenholtz, James Hall, David Klimstra, Razik Yousfi, Thomas Fuchs, Nicolo Fusi, Siqi Liu, Kristen Severson
Downstream bias mitigation is all you need
Arkadeep Baksi, Rahul Singh, Tarun Joshi
Building a Domain-specific Guardrail Model in Production
Mohammad Niknazar, Paul V Haley, Latha Ramanan, Sang T. Truong, Yedendra Shrinivasan, Ayan Kumar Bhowmick, Prasenjit Dey, Ashish Jagmohan, Hema Maheshwari, Shom Ponoth, Robert Smith, Aditya Vempaty, Nick Haber, Sanmi Koyejo, Sharad Sundararajan
MathViz-E: A Case-study in Domain-Specialized Tool-Using Agents
Arya Bulusu, Brandon Man, Ashish Jagmohan, Aditya Vempaty, Jennifer Mari-Wyka, Deepak Akkil