September 30, 2023

The co-founder of the AI safety nonprofit told Fox News that an AI arms race, in which countries and corporations compete to build the most powerful AI systems, could pose an existential threat to humanity.

“AI may pose a risk of extinction, in part because we’re currently caught in an AI arms race,” said Dan Hendrycks, executive director of the Center for AI Safety. “We’re building increasingly powerful technologies, but we don’t know how to fully control or understand them.”

OpenAI CEO Sam Altman signed a statement from the Center for AI Safety saying AI poses an existential threat to humanity. (Bill Clark/CQ-Roll Call, Inc via Getty Images)

“We did the same with nuclear weapons,” he continued. “We’re all in the same boat when it comes to existential risk and extinction risk.”

AI arms race could lead to extinction: director of AI safety organization


Hendrycks’ organization issued a statement Tuesday warning that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Many top AI researchers, developers and executives, such as OpenAI CEO Sam Altman and “Godfather of AI” Geoffrey Hinton, signed the statement.

Altman recently testified before Congress arguing for government regulation of AI to “mitigate” the risks posed by the technology.

“My concern is that the development of artificial intelligence is a relatively uncontrolled process, and AIs could end up gaining more influence in society because they are so good at automating things,” Hendrycks, who also signed his organization’s statement, told Fox News. “They’re competing with each other, and with this ecosystem of agents running a lot of operations, we could lose control of that process.”

“This could make us a second-class species, or go the way of the Neanderthals,” he continued.


Tesla CEO Elon Musk has been outspoken about the potential threat of artificial intelligence, saying the technology could lead to the “destruction of civilization” or election meddling. Musk also signed a letter in March arguing for a moratorium on large AI experiments.


Elon Musk has warned that artificial intelligence could lead to the “destruction of civilization.” (Justin Sullivan/Getty Images)

However, the letter failed to prompt large AI developers such as OpenAI, Microsoft and Google to suspend experiments.

“We’re in an AI arms race that has the potential to bring us to the brink of disaster, much like the nuclear arms race,” Hendrycks said. “So that means we need to prioritize this globally.”

Click here for the Fox News app

But the organizations creating the world’s most powerful AI systems have no incentive to slow down or pause development, Hendrycks warned. The Center for AI Safety hopes its statement will show people that AI poses credible and significant risks.

“Hopefully now we can start a conversation so it can be addressed like other global priorities, like international agreements or regulations,” Hendrycks told Fox News. “We need to treat this as a bigger priority, with both social and technical efforts to mitigate these risks.”

To watch the full interview with Hendrycks, click here.


