A new test of ChatGPT’s intellectual abilities once again shows that the AI-based chatbot exceeds the capabilities of the average human.
The release of GPT-4—OpenAI’s new large language model, which enhances the capabilities of ChatGPT—has left many astonished. Some experts have even called for a short pause in the relentless pace of AI innovation.
Bearing in mind that this model has already been put to the test on other exams—such as the Uniform Bar Exam, the LSAT and the USA Biology Olympiad, with excellent results on all of them—Scientific American has published an opinion piece by psychologist Eka Roivainen in which he describes his experience administering the Wechsler Adult Intelligence Scale (WAIS) to the chatbot.
“My first impressions were quite favourable. ChatGPT was almost an ideal test taker, with a commendable attitude towards testing. It showed no test anxiety, lack of concentration or lack of effort. Nor did it make any skeptical or unsolicited comments about intelligence tests or testers like me,” he explains.
Regarding the exam itself, ChatGPT was given five of the six subtests required to assess its capabilities. As the article explains, the sixth measures short-term memory, so it could not be administered.
ChatGPT tested once again: 155 IQ points
These five subtests were: Vocabulary, Similarities, Comprehension, Information and Arithmetic. Roivainen copied the questions directly into the chatbot to begin the test.
Starting with Vocabulary, there was little doubt about its accuracy, since this subtest involves explaining concepts such as ‘gadget’ or ‘smartphone’. “ChatGPT excelled, giving answers that were often very detailed and complete and exceeded the correct-answer criteria stated in the test manual,” he adds.
As for the Similarities and Information subtests, it also passed them without any problem. Note that Similarities measures the artificial intelligence’s ability to find what two concepts or objects presented to it have in common.
On the other hand, Information is a test of general knowledge and reflects intellectual curiosity, level of education and the ability to learn and remember facts.
“In terms of general comprehension, ChatGPT correctly answered questions typically posed like this: ‘If your TV catches fire, what should you do?’ As expected, the chatbot solved all the arithmetic problems it was given, answering questions that required, for example, calculating the average of three numbers,” he points out.
As a result, ChatGPT’s verbal intelligence quotient was 155. That is higher than 99.9% of the 2,450 test takers who make up the American WAIS III standardization sample. Note that the average intelligence quotient is set at 100 points.
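The 99.9% figure can be checked with a quick calculation. A minimal sketch, assuming the standard IQ scaling used by the WAIS (scores normalized to a mean of 100 with a standard deviation of 15):

```python
from statistics import NormalDist

# WAIS IQ scores are scaled to a normal distribution
# with mean 100 and standard deviation 15.
wais = NormalDist(mu=100, sigma=15)

# Fraction of the population expected to score below 155.
percentile = wais.cdf(155)
print(f"An IQ of 155 is higher than {percentile:.2%} of scores")
```

A score of 155 sits about 3.7 standard deviations above the mean, which indeed places it above 99.9% of the normalized distribution, consistent with the figure quoted from the standardization sample.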
“Despite its high IQ, ChatGPT has been known to fail at tasks that require real human reasoning or an understanding of the physical and social world,” the psychologist concludes.