OAN’s Roy Francis
10:10 AM – Wednesday, August 2, 2023
A recent study showed ChatGPT scoring on the same level as undergraduate students when answering reasoning questions that are common on standardized tests.
The new research, conducted by the University of California, Los Angeles and published in Nature Human Behaviour, was aimed at assessing the model’s ability to understand and respond to complex information.
The program answered around 80% of questions correctly on an IQ test based on Raven’s Progressive Matrices, a nonverbal test used to assess abstract reasoning. The average person scores around 60% on the same test.
In the new study, ChatGPT was also particularly successful at analogical reasoning, a problem-solving ability long believed to be unique to humans. Analogical reasoning draws on known examples and logical relationships to deduce solutions to new questions and problems.
The program also scored “better than the average score for the humans” on analogy questions from the SAT, according to the researchers.
Furman University assistant philosophy professor Darren Hick had previously expressed concern that the artificial intelligence program would continue to learn from its mistakes. He feared that the work produced by programs such as ChatGPT would eventually become so similar to human work that the two would be indistinguishable.
According to senior study author and UCLA psychology professor Hongjing Lu, the new findings proved Hick’s fears were well founded.
“Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well,” she said. “Language learning models are just trying to do word prediction so we’re surprised they can do reasoning.”
Keith Holyoak, a co-author of the study, suggested the program may be reasoning in a human-like way, but said he would like to investigate further to determine whether that is really how it learns and processes information.
“GPT-3 might be kind of thinking like a human,” Holyoak explained. “People did not learn by ingesting the entire internet, so the training method is completely different [than that of people]. We’d like to know if it’s really doing it the way people do, or if it’s something brand new — a real artificial intelligence — which would be amazing in its own right.”
The new study points to significant progress in ChatGPT’s natural language understanding, which could lead to the program being applied in fields such as customer service, content generation, and academic research.
Even as the program grows more advanced and its performance improves, it may still provide incorrect or nonsensical answers. OpenAI, the company behind ChatGPT, says such instances point to areas of the program that need improvement, and that it is continuing work to enhance the model’s capabilities.
The research primarily used GPT-3, rather than the newer, more advanced GPT-4 model.