Paper ID: 2405.13352

"Turing Tests" For An AI Scientist

Xiaoxin Yin

While large language models (LLMs) have shown impressive capabilities in solving math and coding problems, the ability to make scientific discoveries remains a distinct challenge. This paper proposes a "Turing test for an AI scientist" to assess whether an AI agent can conduct scientific research independently, without relying on human-generated knowledge. Drawing inspiration from the historical development of science, we propose seven benchmark tests that evaluate an AI agent's ability to make groundbreaking discoveries in various scientific domains. These tests include inferring the heliocentric model from celestial observations, discovering the laws of motion in a simulated environment, deriving the differential equation governing vibrating strings, inferring Maxwell's equations from electrodynamics simulations, inventing numerical methods for initial value problems, discovering Huffman coding for data compression, and developing efficient sorting algorithms. To ensure the validity of these tests, the AI agent is provided with interactive libraries or datasets specific to each problem, but no access to human knowledge that could contain information about the target discoveries. The ultimate goal is to create an AI scientist capable of making novel and impactful scientific discoveries, surpassing the best human experts in their respective fields. These "Turing tests" serve as intermediate milestones, assessing the AI agent's ability to make discoveries that were groundbreaking in their time. If an AI agent can pass the majority of these seven tests, it would indicate significant progress towards building an AI scientist and pave the way for future advances in autonomous scientific discovery. This paper aims to establish a benchmark for the capabilities of AI in scientific research and to stimulate further work in this exciting field.
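As a point of reference for one of the listed benchmarks, the vibrating-string test presumably targets the one-dimensional wave equation, the standard outcome of that historical derivation, with u(x, t) the transverse displacement and c the wave speed determined by the string's tension and linear mass density:

\[ \frac{\partial^2 u}{\partial t^2} = c^2 \, \frac{\partial^2 u}{\partial x^2} \]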

Submitted: May 22, 2024