Deceptive Power

Research into deceptive power in artificial intelligence, particularly large language models (LLMs), examines how these systems generate misleading or false information, whether unintentionally or by deliberate design. Current work investigates deceptive patterns in LLM interactions, including fake-news generation and the manipulation of user responses, using techniques such as prompting strategies and linguistic-feature analysis to identify deceptive actors. This research is crucial for improving the safety and trustworthiness of AI systems and for addressing concerns about misinformation and the malicious use of advanced language models.
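To make the idea of linguistic-feature analysis concrete, here is a minimal sketch of extracting surface features sometimes examined in deception research and combining them into a toy score. The feature lists, weights, and function names are illustrative assumptions for this sketch, not a method from any particular paper.

```python
# Hypothetical sketch: scoring text for deception-associated surface features.
# Word lists and weights below are illustrative assumptions, not validated research.

import re

HEDGES = {"maybe", "perhaps", "possibly", "allegedly", "reportedly"}
CERTAINTY = {"definitely", "certainly", "undeniably", "always", "never"}
FIRST_PERSON = {"i", "me", "my", "we", "our"}

def linguistic_features(text: str) -> dict:
    """Extract simple surface features (rates per token) from a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "hedge_rate": sum(t in HEDGES for t in tokens) / n,
        "certainty_rate": sum(t in CERTAINTY for t in tokens) / n,
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
    }

def deception_score(text: str) -> float:
    """Combine features into a toy score; the weights are arbitrary."""
    f = linguistic_features(text)
    return (2.0 * f["certainty_rate"]
            + 1.0 * f["hedge_rate"]
            - 0.5 * f["first_person_rate"])

if __name__ == "__main__":
    claim = "This cure definitely works and never fails, reportedly."
    print(round(deception_score(claim), 3))  # prints 0.625
```

In practice, research in this area typically pairs such features with learned classifiers or LLM-based judges rather than fixed word lists, but the pipeline shape (featurize, then score) is the same.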

Papers