
Artificial intelligence may not be able to achieve human-like cognition unless the programs are linked to robots and designed according to evolutionary principles, British researchers have found.
Revolutionary AI platforms that mimic human conversation, such as the hugely popular ChatGPT, will never match human cognitive ability if they remain disembodied and exist only on a computer screen, despite their massive neural networks and the massive datasets used to train them, researchers at the University of Sheffield report in a new study.
ChatGPT is a chatbot that simulates a conversation with a human user who provides prompts to the AI platform, and it learns in a manner similar to a human child, through supervised and unsupervised learning. Unsupervised learning requires the system to learn through trial and error, such as a human telling the chatbot that it answered a prompt incorrectly and the program building on that feedback. Supervised learning is more akin to children going to school and learning the material they need: the chatbot is trained on inputs that have predetermined outputs, from which the program learns.
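As a rough illustration of the distinction described above (not code from the study), the short Python sketch below uses the scikit-learn library: supervised learning fits a model to inputs with predetermined outputs, while unsupervised learning must find structure in unlabeled data on its own.

```python
# Illustrative sketch only: contrasts supervised and unsupervised learning
# on a tiny toy dataset; it is not the researchers' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy inputs: two small clusters of 2-D points.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])

# Supervised learning: each input has a predetermined output (label),
# and the model learns the mapping from input to label.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.15]]))  # expected to fall in class 0

# Unsupervised learning: no labels are given; the model must
# discover structure (here, two clusters) from the data alone.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```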
Tony Prescott and Stuart Wilson, professors of computer science at the University of Sheffield, have found that while artificial intelligence can mimic the way humans learn, these programs are unlikely to think like humans unless they are given the capacity to artificially sense and perceive the real world.
Artificial intelligence is cracking data in the near future. (iStock)
“ChatGPT and other large neural network models are exciting developments in AI, showing that truly difficult challenges, such as learning the structure of human language, can be solved. However, it is unlikely that these types of AI systems will advance to the point where they can fully think like the human brain if they continue to be designed using the same approach,” Prescott said, according to a University of Sheffield press release about the research.
The study, published in the research journal Science Robotics, argues that the development of human intelligence rests on complex brain subsystems common to all vertebrates. The researchers note that this brain structure, along with the way humans learn and improve through interaction with the real world over the course of evolution, is rarely incorporated when building artificial intelligence systems.

The ChatGPT logo and the words AI Artificial Intelligence are seen in this illustration taken on May 4, 2023. (Reuters/Dado Ruvic/Illustration)
“AI systems are more likely to develop human-like cognitive abilities if they are built with architectures that can learn and improve in similar ways to the human brain, and that leverage its connections to the real world. Robotics can give AI systems these connections, for example, through sensors such as cameras and microphones and actuators such as wheels and grippers. AI systems would then be able to sense the world around them and learn in ways similar to the human brain,” Prescott continued.
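To make the sensorimotor connection Prescott describes concrete, here is a minimal sketch of a sense-think-act loop; the Robot class, its sensor and actuator methods, and the policy function are hypothetical placeholders for illustration, not code from the study or any particular robotics library.

```python
# Hypothetical sketch of an embodied agent's sense-think-act loop.
class Robot:
    def read_camera(self):
        return [0.0] * 64          # placeholder image features

    def read_microphone(self):
        return [0.0] * 16          # placeholder audio features

    def drive_wheels(self, command):
        pass                        # placeholder actuator command


def policy(observation):
    # A learned model would map observations to actions here.
    return "forward"


robot = Robot()
for step in range(100):
    # Sense: gather input from the robot's sensors.
    observation = robot.read_camera() + robot.read_microphone()
    # Think: choose an action based on what was sensed.
    action = policy(observation)
    # Act: send the action to the actuators, which changes
    # what the robot will sense on the next pass through the loop.
    robot.drive_wheels(action)
```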
In comments to Fox News Digital, Prescott added that “significant risks” around such systems stem from the opacity of how they learn, and said he would like to “see greater transparency from companies developing AI alongside improved governance, which needs to be international to be effective.”
“An opaque AI might behave in ways we don’t expect. By understanding how real brains control real bodies, we think these systems can become more transparent, and we can move toward AI that can better explain how its decisions are made,” Prescott said.
The professor also noted that there could be a risk of “a kind of general intelligence” that “matches or exceeds human capabilities in some domains but may be very underdeveloped in others”.
“For example, such AIs may be good at certain types of perception, thinking, and planning, but poor at understanding and reasoning about the moral consequences of their actions. Therefore, as we build these more general-purpose systems, we must think carefully about AI safety and put safety at the heart of AI operating systems. It should be possible. Just as we were able to make airplanes, cars and power stations safe, we should be able to do the same with AI and robots. I think it also means we are going to need regulation, as we have in other industries, to make sure safety requirements are properly addressed,” he explained.

Human brain stimulation or activity with neurons close-up 3D rendered illustration. Neurology, cognition, neuron network, psychology, neuroscience science concept. (iStock)
There has been some progress in building AI platforms for robots that will enable a direct connection between technology and the real world, but these platforms are still a long way from mimicking the structure of the human brain, researchers say.
“In recent decades, efforts to understand how real brains control bodies by building artificial brains for robots have led to exciting developments in robotics and neuroscience. After reviewing some of these efforts that have primarily focused on how artificial brains learn, we think the next breakthrough in artificial intelligence will come from more closely mimicking the development and evolution of the real brain,” Wilson said.