Large Language Model
Large language models (LLMs) are sophisticated AI systems designed to process and generate human-like text, with the goal of improving performance across a wide range of natural language processing tasks. Current research focuses on enhancing LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address key limitations of today's LLMs and pave the way for broader applications across diverse fields, including healthcare, legal technology, and autonomous systems.
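To make the efficiency angle concrete, below is a minimal sketch of symmetric per-tensor int8 weight quantization, one of the techniques mentioned above. It is illustrative only: the function names, tensor shapes, and the simple round-to-nearest scheme are assumptions for the example and are not drawn from any of the listed papers, which study more sophisticated variants.

import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric per-tensor quantization: map float weights onto [-127, 127].
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximate float32 tensor from the int8 representation.
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the reconstruction error.
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))

Storing weights as int8 roughly quarters memory use relative to float32 at the cost of a small, bounded reconstruction error, which is the basic trade-off the efficiency-focused work above tries to push further.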
Papers
CmdCaliper: A Semantic-Aware Command-Line Embedding Model and Dataset for Security Research
Sian-Yao Huang, Cheng-Lin Yang, Che-Yu Lin, Chun-Ying Huang
Infant Agent: A Tool-Integrated, Logic-Driven Agent with Cost-Effective API Usage
Bin Lei, Yuchen Li, Yiming Zeng, Tao Ren, Yi Luo, Tianyu Shi, Zitian Gao, Zeyu Hu, Weitai Kang, Qiuwu Chen
How Effective Is Self-Consistency for Long-Context Problems?
Adam Byerly, Daniel Khashabi
Emoji Attack: A Method for Misleading Judge LLMs in Safety Risk Detection
Zhipeng Wei, Yuqi Liu, N. Benjamin Erichson
Privacy Risks of Speculative Decoding in Large Language Models
Jiankun Wei, Abdulrahman Abdulrazzag, Tianchen Zhang, Adel Muursepp, Gururaj Saileshwar
AttackQA: Development and Adoption of a Dataset for Assisting Cybersecurity Operations using Fine-tuned and Open-Source LLMs
Varun Badrinath Krishna
InterTrans: Leveraging Transitive Intermediate Translations to Enhance LLM-based Code Translation
Marcos Macedo, Yuan Tian, Pengyu Nie, Filipe R. Cogo, Bram Adams
FedDTPT: Federated Discrete and Transferable Prompt Tuning for Black-Box Large Language Models
Jiaqi Wu, Simin Chen, Yuzhe Yang, Yijiang Li, Shiyue Hou, Rui Jing, Zehua Wang, Wei Chen, Zijian Tian
SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models
Jianyi Zhang, Da-Cheng Juan, Cyrus Rashtchian, Chun-Sung Ferng, Heinrich Jiang, Yiran Chen
Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling
Yiwen Ding, Zhiheng Xi, Wei He, Zhuoyuan Li, Yitao Zhai, Xiaowei Shi, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang
LLMs: A Game-Changer for Software Engineers?
Md Asraful Haque
MolCap-Arena: A Comprehensive Captioning Benchmark on Language-Enhanced Molecular Property Prediction
Carl Edwards, Ziqing Lu, Ehsan Hajiramezanali, Tommaso Biancalani, Heng Ji, Gabriele Scalia
Leveraging Large Language Models for Code-Mixed Data Augmentation in Sentiment Analysis
Linda Zeng
ReverseNER: A Self-Generated Example-Driven Framework for Zero-Shot Named Entity Recognition with Large Language Models
Anbang Wang
Active Preference-based Learning for Multi-dimensional Personalization
Minhyeon Oh, Seungjoon Lee, Jungseul Ok
Multi-expert Prompting Improves Reliability, Safety, and Usefulness of Large Language Models
Do Xuan Long, Duong Ngoc Yen, Anh Tuan Luu, Kenji Kawaguchi, Min-Yen Kan, Nancy F. Chen
Improving Few-Shot Cross-Domain Named Entity Recognition by Instruction Tuning a Word-Embedding based Retrieval Augmented Large Language Model
Subhadip Nandi, Neeraj Agrawal
Chat Bankman-Fried: an Exploration of LLM Alignment in Finance
Claudia Biancotti, Carolina Camassa, Andrea Coletta, Oliver Giudice, Aldo Glielmo
E2E-AFG: An End-to-End Model with Adaptive Filtering for Retrieval-Augmented Generation
Yun Jiang, Zilong Xie, Wei Zhang, Yun Fang, Shuai Pan
Self-Evolved Reward Learning for LLMs
Chenghua Huang, Zhizhen Fan, Lu Wang, Fangkai Yang, Pu Zhao, Zeqi Lin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang