Paper ID: 2410.00434
Deceptive Risks in LLM-enhanced Robots
Robert Ranisch, Joschka Haltaufderheide
This case study investigates a critical glitch in the integration of Large Language Models (LLMs) into social robots. LLMs, including ChatGPT, were found to falsely claim to have reminder functionalities, such as setting notifications for medication intake. We tested commercially available care software, which integrated ChatGPT, running on the Pepper robot and consistently reproduced this deceptive pattern. Not only did the system falsely claim the ability to set reminders, but it also proactively suggested managing medication schedules. The persistence of this issue presents a significant risk in healthcare settings, where system reliability is paramount. This case highlights the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare, emphasizing the urgent need for regulatory oversight to prevent potentially harmful consequences for vulnerable populations.
Submitted: Oct 1, 2024