Paper ID: 2410.21296

The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence

Serge Dolgikh

A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on similarity, or 'imitation', of human-like actions and behaviors, benchmarking the performance of intelligent systems on the scale of human cognitive skills. In this work we outline the shortcomings of this line of thought, which rests on the implicit presumption of equivalence and compatibility between the originating and emergent intelligences. We argue that, under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives. The difference in the rate of progress between natural and artificial systems, noted on multiple occasions in the discourse on artificial intelligence, can then lead to a progressive divergence of the intelligences in their cognitive abilities, functions and resources, values, ethical frameworks, worldviews, intents, and existential objectives: the scenario of the AGI evolutionary gap. We discuss evolutionary processes that can guide the development of emergent intelligent systems and attempt to identify the starting point of the progressive divergence scenario.

Submitted: Oct 14, 2024