In recent weeks, two members of Congress have sounded the alarm about the dangers of artificial intelligence.
Rep. Ted Lieu, D-Calif., wrote in a New York Times guest essay in January that he was “horrified” by the ChatGPT chatbot’s ability to mimic human writers. Another Democrat, Rep. Jake Auchincloss of Massachusetts, delivered a one-minute speech written by a chatbot calling for AI regulation.
But even as lawmakers pay attention to the technology, few are doing anything about it. No bills have been proposed to protect individuals or check the development of potentially dangerous aspects of artificial intelligence. And legislation introduced in recent years to curb AI applications such as facial recognition has died in Congress.
The problem is most lawmakers don’t even know what AI is, said Rep. Jay Obernolte, a California Republican who is the only member of Congress with a master’s degree in AI.
“Before regulation, there needs to be agreement on what the danger is, and that requires a deep understanding of what AI is,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the main danger of AI won’t come from evil robots with red lasers in their eyes.”
The inaction on AI is part of a familiar pattern in which technology has once again outpaced U.S. rulemaking and regulation. Lawmakers have long struggled to understand new technologies; one famously described the internet as “a series of tubes.” Companies, meanwhile, have pushed for looser regulation, saying the industry needs few obstacles as the U.S. battles China for technological leadership.
That means Washington is taking a hands-off stance as the AI boom sweeps Silicon Valley, with Microsoft, Google, Amazon and Meta racing to develop the technology. The spread of artificial intelligence, which has spawned chatbots that can write poetry and self-driving cars, has sparked debate over its limits, with some fearing the technology could eventually replace humans in jobs, or even become sentient.
Carly Kind, director of the Ada Lovelace Institute, a London-based organization that focuses on the responsible use of technology, said the lack of regulation encouraged companies to prioritize financial and commercial interests at the expense of security.
“By failing to build such guardrails, policymakers are creating the conditions for irresponsible AI competition,” she said.
In a regulatory vacuum, the European Union has played a leading role. In 2021, EU policymakers proposed a law focused on regulating AI technologies that could cause the most harm, such as facial recognition and applications related to critical public infrastructure such as water supply. The measure, expected to be passed as early as this year, would require AI makers to conduct risk assessments of how their applications might affect health, safety and individual rights such as free speech.
Companies that violate the law could be fined as much as 6% of their global revenue, which could total billions of dollars for the world’s largest tech platforms. EU policymakers say laws are needed to maximize the benefits of artificial intelligence while minimizing its societal risks.
Rep. Donald S. Beyer Jr., D-Va., who recently started taking evening college courses on artificial intelligence, said: “We’re just beginning to understand this technology, and we’re weighing its enormous benefits against its potential dangers.”
U.S. lawmakers will review the European bill for regulatory ideas, Mr. Beyer said, adding, “It will take time.”
In fact, the federal government has been deeply involved in AI for more than 60 years. In the 1960s, the Defense Advanced Research Projects Agency (DARPA) began funding research and development of the technology. That support helped enable military applications such as drones and cybersecurity tools.
In January 2015, physicist Stephen Hawking and Elon Musk, the chief executive of Tesla and now the owner of Twitter, warned that artificial intelligence was becoming dangerously intelligent and could lead to the extinction of humanity, and they called for regulation. The alarm spread to Washington.
In November 2016, when the Senate Space, Science and Competitiveness Subcommittee held its first congressional hearing on artificial intelligence, lawmakers twice cited Mr. Musk’s warnings. At the hearing, academics and the chief executive of OpenAI, a San Francisco lab, played down Mr. Musk’s predictions or said the scenarios he described were at least many years away.
Some lawmakers have emphasized the importance of national leadership in AI development. Congress must “ensure that America remains a global leader throughout the 21st century,” Sen. Ted Cruz, Republican of Texas and chairman of the subcommittee, said at the time.
DARPA later announced that it was earmarking $2 billion for artificial intelligence research projects.
Warnings about the dangers of AI intensified in 2020, when the Vatican, IBM and Microsoft pledged to develop “ethical artificial intelligence,” meaning that organizations would be transparent about how the technology works, respect privacy and minimize bias. The group called for regulation of facial recognition software, which uses large databases of photos to identify people. In Washington, some lawmakers tried to create rules for facial recognition technology and corporate audits to prevent discriminatory algorithms. The bills went nowhere.
“It’s not a priority, and there’s no sense of urgency among members,” said Mr. Beyer, who last year failed to secure enough support to pass a bill, introduced by Rep. Yvette D. Clarke, D-N.Y., that would have required audits of artificial intelligence algorithms.
More recently, some government officials have tried to bridge the knowledge gap around AI. In January, about 150 lawmakers and their staff attended a meeting of the usually sleepy AI Caucus featuring Jack Clark, a co-founder of the AI firm Anthropic.
Federal agencies are taking some action on AI by enforcing laws already on the books. The Federal Trade Commission has brought enforcement actions against companies that use artificial intelligence in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque AI systems used by credit agencies could run afoul of anti-discrimination laws.
The FTC has also proposed rules to curb the data collection used in AI technologies, and the Food and Drug Administration has issued a list of AI-enabled medical devices within its purview.
In October, the White House released a blueprint for AI rules that emphasizes individual privacy, safe automated systems, protection from algorithmic discrimination and meaningful human alternatives.
But none of those efforts became law.
“The picture in Congress is bleak,” said Amba Kak, executive director of the AI Now Institute, a nonprofit research center that recently advised the FTC. “The stakes are high because these tools are being used in very sensitive areas of society, like hiring, housing and credit, and there is real evidence that artificial intelligence tools have been flawed and biased for years.”
Tech companies have lobbied against policies that limit how they use artificial intelligence and have called for mostly voluntary regulation.
In 2020, Sundar Pichai, the chief executive of Google’s parent company, Alphabet, visited Brussels to argue for “sensible regulation” that would not stifle the technology’s potential benefits. That same year, the U.S. Chamber of Commerce and more than 30 companies, including Amazon and Meta, lobbied against a facial recognition bill, according to OpenSecrets.org.
“We’re not against regulation, but we want sensible regulation,” said Jordan Crenshaw, a vice president of the Chamber of Commerce, which has argued that the draft EU law is too broad and could hinder technological development.
In January, Sam Altman, the chief executive of OpenAI, which created ChatGPT, visited several members of Congress to demonstrate GPT-4, a new AI model that can write essays, solve complex coding problems and more, according to Mr. Beyer and Mr. Lieu. Mr. Altman, who has expressed support for regulation, showed how GPT-4 would have greater security controls than earlier AI models, the lawmakers said.
Mr. Lieu, who met with Mr. Altman, said the government could not rely on individual companies to protect users. He plans to introduce a bill this year to create a commission to study artificial intelligence and a new agency to oversee it.
“OpenAI has decided to put controls on its technology, but what guarantee is there that another company will do the same?” he asked.