Future Implications
Research on the implications of future AI technologies focuses on understanding and mitigating the risks and biases inherent in increasingly powerful models, while also exploring their potential benefits across diverse fields. Current work examines the impact of large language models (LLMs) on tasks such as translation, sentiment analysis, and human activity recognition, investigating issues like memorization, bias propagation, and the relative effectiveness of different model architectures (e.g., transformer-based and diffusion models). This line of research is crucial for ensuring responsible AI development and deployment: it informs ethical guidelines and improves the reliability and fairness of AI systems in both academic and practical applications.
Papers
A Survey of Backdoor Attacks and Defenses on Large Language Models: Implications for Security Measures
Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Xiaobao Wu, Jie Fu, Yichao Feng, Fengjun Pan, Luu Anh Tuan
Implications for Governance in Public Perceptions of Societal-scale AI Risks
Ross Gruetzemacher, Toby D. Pilditch, Huigang Liang, Christy Manning, Vael Gates, David Moss, James W. B. Elsey, Willem W. A. Sleegers, Kyle Kilian