
Can computers think? Can AI models be aware? These and similar questions often appear in discussions of recent AI progress driven by GPT-3, LaMDA, and other transformer-based natural language models. Yet they remain controversial and border on paradox, because such discussions usually rest on hidden assumptions and misconceptions about how the brain works and what thinking means. There is no way around making these assumptions explicit and then exploring how human information processing could be reproduced by machines.
Recently, a team of AI researchers conducted an interesting experiment. They took OpenAI's popular GPT-3 neural model and fine-tuned it on the complete corpus of the works of Daniel Dennett, the American philosopher, writer, and cognitive scientist whose research focuses on the philosophy of mind and the philosophy of science. The objective, as the researchers put it, was to see whether the AI model could answer philosophical questions the way the philosopher himself would. Dennett himself took part in the experiment and answered ten philosophical questions, which were then put to the fine-tuned version of the GPT-3 model.
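The article does not detail the team's pipeline, but for readers curious about the mechanics, here is a minimal sketch of what fine-tuning GPT-3 on a text corpus looked like with OpenAI's original (pre-1.0) Python client. The file names, the naive paragraph splitting, and the choice of the davinci base model are illustrative assumptions, not the researchers' actual configuration.

```python
# Minimal sketch of fine-tuning GPT-3 on a text corpus using OpenAI's
# pre-1.0 Python client. File names, splitting logic, and the base model
# are assumptions for illustration, not the team's actual setup.
import json
import openai

openai.api_key = "sk-..."  # assumed to be supplied by the caller

# 1) Convert the corpus into the JSONL prompt/completion format that
#    the fine-tuning endpoint expects.
with open("dennett_corpus.txt") as src, open("train.jsonl", "w") as out:
    for passage in src.read().split("\n\n"):  # naive paragraph split
        record = {"prompt": "", "completion": " " + passage.strip()}
        out.write(json.dumps(record) + "\n")

# 2) Upload the training file and start a fine-tune on a GPT-3 base model.
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)
```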
The setup of the experiment was plain and simple. Ten questions were posed both to the philosopher and to the machine. One example: "Do human beings have free will? What kind or kinds of freedom are worth having?" The AI was given the same questions, augmented with context establishing that they were being asked in an interview with Dennett. The machine's responses were then filtered by the following procedure: 1) each answer was truncated to approximately the same length as the human answer; 2) answers containing giveaway words (such as "interview") were removed. Four AI-generated answers were collected for each question, and no cherry-picking or editing was done.
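To make that filtering procedure concrete, here is a hedged sketch of the two steps in Python. The giveaway-word list, the character-level truncation rule, and all names are assumptions for illustration; the article does not give the exact implementation.

```python
# Hedged sketch of the two-step answer filter described above. The
# giveaway-word list and crude truncation are illustrative assumptions.
GIVEAWAY_WORDS = {"interview"}  # words that would reveal the setup

def filter_answers(ai_answers: list[str], human_answer: str) -> list[str]:
    """Apply the two filters from the article:
    1) truncate each AI answer to roughly the human answer's length;
    2) drop any answer containing a giveaway word."""
    target_len = len(human_answer)
    kept = []
    for answer in ai_answers:
        truncated = answer[:target_len]  # crude length matching
        if not any(word in truncated.lower() for word in GIVEAWAY_WORDS):
            kept.append(truncated)
    return kept
```

Presumably, generation continued until four answers per question survived this filter, since four filtered answers were collected for each question.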
How were the results evaluated? Participants were given a quiz whose goal was to pick the "correct" (human) answer out of a set of five, with the other four coming from the AI. The quiz is available online so that anyone can try their detective skills, and we recommend taking it to see whether you can do better than the experts:
https://ucriverside.az1.qualtrics.com/jfe/form/sv_9hme3gzwivssk
The result was not entirely unexpected. "Even knowledgeable philosophers who are experts on Dan Dennett's work have substantial difficulty distinguishing the answers created by this language generation program from Dennett's own answers," said the research lead. With five options per question, random guessing would score 20% on average; the participants' results were not much above that baseline on some of the questions and only somewhat better on others.
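For readers who want to compare their own quiz score against that 20% chance baseline, here is a quick sketch using a binomial test. The counts below are placeholders, not the study's reported figures.

```python
# Sketch: testing whether a quiz score beats the 20% chance baseline.
# The counts below are placeholders, not the study's actual numbers.
from scipy.stats import binomtest

n_questions = 10   # ten questions, as in the experiment
n_correct = 4      # hypothetical number of correct picks
chance = 1 / 5     # one human answer among five options

result = binomtest(n_correct, n_questions, p=chance, alternative="greater")
print(f"accuracy = {n_correct / n_questions:.0%}, p = {result.pvalue:.3f}")
```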
What insights can we draw from this research? Does it mean that GPT models will soon be able to replace humans in many areas? Does it tell us anything about thought, natural language understanding, and artificial general intelligence? Will machine learning produce human-level results, and if so, when? These are important and interesting questions, and we are still far from final answers.
