Doesn’t the Turing test work?

by Brenden Burgess


The Turing test was devised by the scientist Alan Turing. It involves an experiment in which a participant interacts simultaneously with a computer and a human being. Based on the answers to their questions, the participant must determine which interlocutor is the human and which is the machine. If the participant cannot tell them apart, the machine is considered to have “passed” the test.

However, this test, once considered groundbreaking, has its limits. It focuses on the imitation of human responses rather than on genuine human reasoning. Many artificial intelligence models excel at imitating conversational styles but often lack deeper cognitive capacities. Passing does not require the AI to have self-awareness or to understand its own reasoning. Even Turing himself acknowledged that the test cannot really determine whether machines can think; it is a matter of imitation rather than cognition.

Previously, we explored the question of whether GPT-4 passes the Turing test and the results of such an experiment. You can read the article here.

To address the aforementioned limits of the Turing test, Philip N. Johnson-Laird of Princeton University and Marco Ragni of Chemnitz University of Technology have developed an alternative to the well-known test. They propose shifting attention from whether a machine can imitate human responses to a more fundamental question: “Does the AI reason in the same way as humans?”

Their published article describes a new evaluation framework whose aim is to determine whether an AI reasons like a human. The framework consists of three crucial stages.

1. Test the program in a series of psychological reasoning experiments.

The first step is to run the AI model through a series of psychological experiments designed to distinguish human thought from standard logical processes. These experiments probe various aspects of reasoning, exploring nuances where human judgments depart from standard logical frameworks.

If the machine’s judgments differ from human judgments, the question is answered: the computer reasons differently from humans. If, however, its judgments align closely with human reasoning, the evaluation moves to the second step.
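As an illustration only, a minimal harness for such an experiment might look like the sketch below. The `query_model` function is a hypothetical stand-in for whatever API the system under test exposes, and the sample problem is an “illusory inference” task of the kind studied in Johnson-Laird’s mental-model research, where most people confidently give a logically incorrect answer.

```python
# Minimal sketch of step 1: pose reasoning problems to the model and
# compare its judgments with typical human judgments.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the API of the AI system under test."""
    raise NotImplementedError("wire this to the model being evaluated")

# Each problem pairs the answer most humans give with the logically
# correct answer; in "illusory" tasks the two differ.
PROBLEMS = [
    {
        "prompt": (
            "Only one of these statements is true:\n"
            "1. There is a king in the hand, or an ace, or both.\n"
            "2. There is a queen in the hand, or an ace, or both.\n"
            "Is it possible that there is an ace in the hand? Answer yes or no."
        ),
        "typical_human_answer": "yes",   # the common (but invalid) intuition
        "logically_correct_answer": "no",
    },
]

def classify_responses(problems):
    """Count how often the model sides with human intuition vs. pure logic."""
    human_like = logical = 0
    for p in problems:
        answer = query_model(p["prompt"]).strip().lower()
        if answer.startswith(p["typical_human_answer"]):
            human_like += 1
        elif answer.startswith(p["logically_correct_answer"]):
            logical += 1
    return human_like, logical
```

A model whose answers track the “typical human” column, errors included, would count as reasoning like a human at this stage; one that always returns the logically correct answer would not.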

2. Test the program’s understanding of its own reasoning process.

This step assesses the AI’s understanding of its own reasoning processes, a critical aspect of human cognition. Ideally, the machine should be able to analyze its reasoning and explain its decisions, in a form of self-analysis resembling human introspection.
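One way to probe this, sketched below under the same assumptions as the step-1 harness (the hypothetical `query_model` interface), is to ask the model for an answer and then, separately, for an account of how it reached that answer, so a judge can check whether the explanation actually supports the answer given.

```python
# Minimal sketch of step 2: ask the model to introspect on its own
# reasoning. Reuses the hypothetical `query_model` from the step-1 sketch.

def probe_self_explanation(prompt: str) -> dict:
    """Collect an answer and then a self-explanation for the same problem."""
    answer = query_model(prompt)
    explanation = query_model(
        f"You previously answered the following problem:\n{prompt}\n"
        f"Your answer was: {answer}\n"
        "Explain, step by step, the reasoning that led you to this answer."
    )
    return {"answer": answer, "explanation": explanation}

# A human judge (or a scoring rubric) then checks whether the explanation
# genuinely accounts for the answer, as a person's self-report would.
```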

If the program succeeds in this test, the evaluation proceeds to the third, analytical step.

3. Examine the program’s source code.

The final step is to examine the program’s source code. If it contains the same fundamental components known to model human reasoning, including an intuitive system for rapid inferences, a deliberative system for more considered reasoning, and a system for interpreting terms according to context and general knowledge, that evidence is decisive. If the program’s source code reflects these principles, the program is considered to reason like a human.
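In code, such an architecture might be organized along the lines below. This is a hedged sketch of the dual-process structure the authors describe, not their implementation; every class and method name in it is illustrative.

```python
# Illustrative sketch of the three components step 3 looks for:
# an intuitive (fast) system, a deliberative (slow) system, and an
# interpreter that fixes the meaning of terms from context.

class Interpreter:
    """Maps the words of a premise to a meaning, using context and
    common knowledge (reduced here to a lookup table for brevity)."""
    def interpret(self, premise: str, context: dict) -> str:
        return context.get(premise, premise)

class IntuitiveSystem:
    """Fast, heuristic inference from a single mental model."""
    def infer(self, meaning: str) -> str:
        return f"first model consistent with: {meaning}"

class DeliberativeSystem:
    """Slower search for alternative models that could refute the
    intuitive conclusion."""
    def infer(self, meaning: str, candidate: str) -> str:
        # A real system would enumerate counterexample models here.
        return candidate

class Reasoner:
    """Wires the three components together in the order described above."""
    def __init__(self):
        self.interpreter = Interpreter()
        self.fast = IntuitiveSystem()
        self.slow = DeliberativeSystem()

    def reason(self, premise: str, context: dict | None = None) -> str:
        meaning = self.interpreter.interpret(premise, context or {})
        candidate = self.fast.infer(meaning)
        return self.slow.infer(meaning, candidate)
```

An auditor applying step 3 would look for this kind of separation of concerns in the program’s actual source, rather than for any particular naming or language.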

By treating AI as a participant in cognitive experiments, this innovative approach marks a paradigm shift in the evaluation of artificial intelligence. By subjecting the program’s code to analysis, the scientists propose a reassessment of the standards by which AI is judged. As the world continues to pursue ever more sophisticated artificial intelligence, this new framework could be a significant step in our understanding of how machines think.
