Real Power
Research on "real power" in artificial intelligence currently centers on understanding and leveraging the capabilities of large language models (LLMs) across a range of tasks, moving beyond traditional fine-tuning toward more efficient approaches such as in-context learning. Current work aims to improve LLM performance through techniques such as self-prompting, novel architectures like autoregressive decision trees, and the incorporation of external knowledge sources to strengthen reasoning and reduce hallucinations. These advances have significant implications for fields including natural language processing, computer vision, and scientific discovery, enabling more efficient and effective solutions to complex problems.
Papers
Harnessing the Power of Artificial Intelligence to Vitalize Endangered Indigenous Languages: Technologies and Experiences
Claudio Pinhanez, Paulo Cavalin, Luciana Storto, Thomas Finbow, Alexander Cobbinah, Julio Nogima, Marisa Vasconcelos, Pedro Domingues, Priscila de Souza Mizukami, Nicole Grell, Majoí Gongora, Isabel Gonçalves
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks
Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Farhan Ahmed, Ling Cai, Nathalie Baracaldo
Limits and Powers of Koopman Learning
Matthew J. Colbrook, Igor Mezić, Alexei Stepanenko
InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct
Yutong Wu, Di Huang, Wenxuan Shi, Wei Wang, Lingzhe Gao, Shihao Liu, Ziyuan Nan, Kaizhao Yuan, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Yewen Pu, Dawei Yin, Xing Hu, Yunji Chen
On the Power of Convolution Augmented Transformer
Mingchen Li, Xuechen Zhang, Yixiao Huang, Samet Oymak
The Power of LLM-Generated Synthetic Data for Stance Detection in Online Political Discussions
Stefan Sylvius Wagner, Maike Behrendt, Marc Ziegele, Stefan Harmeling
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning
Yi Liu, Cong Wang, Xingliang Yuan