October 3, 2023

Long before Elon Musk and Apple co-founder Steve Wozniak signed a letter warning that artificial intelligence poses “profound risks” to humanity, British theoretical physicist Stephen Hawking had already sounded the alarm about the evolving technology.

“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview.

Hawking, who suffered from amyotrophic lateral sclerosis (ALS) for more than 55 years, died in 2018 at the age of 76. Despite his warnings about AI, Hawking himself relied on a very basic form of the technology to communicate, as his disease weakened his muscles and required him to use a wheelchair.

Hawking lost the ability to speak in 1985 and relied on various means of communication, including a speech-generating device run by Intel that allowed him to use facial movements to select words or letters that were then converted to synthesized speech.


Stephen Hawking in 2007. (Bruno Vincent/Getty Images/File)

Hawking’s 2014 comments to the BBC that artificial intelligence could “spell the end of the human race” came in response to a question about potential upgrades to the speech technology he relied on. He told the BBC that very basic forms of artificial intelligence had already proven powerful, but creating systems that match or exceed human intelligence could be disastrous for humanity.

“It will take off on its own and redesign itself at an ever-increasing rate,” he said.


Hawking added: “Humans, constrained by slow biological evolution, cannot compete and will be displaced.”


Stephen Hawking on October 10, 1979, in Princeton, New Jersey. (Santi Visali/Getty Images)

Hawking’s last book hit the market a few months after his death. The book, “Brief Answers to Big Questions,” provides readers with answers to the questions he was most often asked. It lays out Hawking’s arguments against the existence of God, his vision of how humans could one day live in space, and his fears about genetic engineering and global warming.

Artificial intelligence also topped his list of “big questions,” with Hawking arguing that computers “may surpass humans in intelligence within 100 years.”

“We may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails,” he wrote.

Computers need to be trained to align with human goals, he argued, adding that failing to take the risks associated with artificial intelligence seriously could be “the worst mistake we’ve ever made.”


“It’s easy to dismiss the concept of highly intelligent machines as pure science fiction, but that would be a mistake—and possibly the worst we’ve ever made.”

Hawking’s remarks were echoed in a letter published in March of this year by tech mogul Elon Musk and Apple co-founder Steve Wozniak. The two tech leaders joined thousands of other experts in signing the letter, which called for at least a six-month pause on building AI systems more powerful than OpenAI’s GPT-4 chatbot.


Tesla CEO Elon Musk (Patrick Pleul/Pool/Getty Images/File)

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” read the letter, published by the nonprofit Future of Life Institute.


OpenAI’s ChatGPT had the fastest-growing user base on record in January, reaching 100 million monthly active users as people around the world rushed to use the chatbot, which simulates human-like conversation based on prompts it is given. The lab released the latest version of the platform, GPT-4, in March.


As people around the world scramble to use chatbots, OpenAI’s ChatGPT had the fastest-growing user base in January, reaching 100 million monthly active users. (Eduardo Parra/Europa Press via Getty Images/File)

Despite the calls for a moratorium on research at AI labs working on technology beyond GPT-4, the system’s release was a watershed moment that reverberated across the tech industry, spurring companies to race to build their own AI systems.


Google is working to overhaul its search engine and even create a new one that relies on artificial intelligence; Microsoft has launched the “new Bing” search engine, described as users’ “AI copilot for the web”; and Musk has said he will launch a rival AI system, which he described as “maximally truth-seeking.”


Stephen Hawking appears at the One World Observatory in New York City on April 12, 2016. (Brian Bader/Getty Images)

Hawking suggested a year before his death that the world needed to “learn how to prepare for and avoid the potential risks” associated with artificial intelligence, arguing that such systems “could be the worst event in the history of our civilization.” He noted, however, that the future is still unknown and that, if trained properly, artificial intelligence could prove beneficial to humanity.


“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored and sidelined by it, or conceivably destroyed by it,” Hawking said in a speech at the 2017 Web Summit technology conference in Portugal.
