October 2, 2023

Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House on Friday announcing voluntary AI safety pledges by seven tech companies.

But a closer look at the activity raises questions about what the actions mean for policy around the fast-moving technology.

The answer is that there isn’t much to make sense of yet. Lawmakers and policy experts say the United States is only at the beginning of what is likely to be a long and difficult road toward creating rules for artificial intelligence. Despite the hearings, the White House meetings with top tech executives and the speeches introducing AI bills, it is too early to predict even the roughest sketches of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of disinformation and safety.

“It’s early days and no one knows what the law will look like yet,” said Chris Lewis, president of Public Knowledge, a consumer group that has called for an independent body to regulate artificial intelligence and other tech platforms.

The U.S. remains far behind Europe, where lawmakers are preparing to enact an artificial intelligence law this year that would impose new restrictions on what are considered the technology’s riskiest uses. In contrast, the United States remains largely divided over the best way to handle the technology, and many American lawmakers are still trying to understand it.

That suits many tech companies, policy experts say. While some companies say they welcome rules on artificial intelligence, they have also argued against strict regulations like those taking shape in Europe.

Below is an overview of the state of AI regulation in the United States.

The Biden administration has been on a fast-paced listening tour with AI companies, academics and civil society groups. The effort began in May, when Vice President Kamala Harris met with the CEOs of Microsoft, Google, OpenAI and Anthropic at the White House and pressed the tech industry to put a greater emphasis on safety.

Representatives of seven technology companies appeared at the White House on Friday to announce a set of principles to make their artificial intelligence technologies safer, including third-party security checks and watermarking of AI-generated content to help stop the spread of misinformation.

Many of the announced practices are already in place at OpenAI, Google and Microsoft, or are on track to take effect soon. They do not amount to new regulation, and the promises of self-regulation fell short of what consumer groups had hoped for.

“For big tech companies, voluntary commitments are not enough,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must establish meaningful, enforceable guardrails to ensure fair and transparent use of AI, and to protect individuals’ privacy and civil rights.”

Last fall, the White House unveiled a blueprint for an AI Bill of Rights, a set of consumer protection guidelines for the technology. The guidelines are not regulations, nor are they enforceable. This week, White House officials said they were working on an executive order on artificial intelligence, but gave no details or timing.

The loudest drumbeat for regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include creating an agency to oversee AI, holding companies liable for AI technologies that spread disinformation, and requiring licenses for new AI tools.

Lawmakers have also held hearings on artificial intelligence, including one in May with Sam Altman, the CEO of OpenAI, the maker of the ChatGPT chatbot. Some lawmakers discussed other regulatory ideas at the hearing, including nutrition-style labels that would inform consumers of the risks of artificial intelligence.

These bills are in their early stages and so far lack the support needed to move forward. Last month, Senate Majority Leader Chuck Schumer, D-N.Y., announced a monthslong process for developing AI legislation that will include educational sessions for lawmakers in the fall.

“In many ways we are starting from scratch, but I believe Congress can rise to the challenge,” he said in a speech at the Center for Strategic and International Studies at the time.

Regulators are starting to take action to police some of the problems created by artificial intelligence.

Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, requesting information about how the company secures its systems and how the chatbot could potentially harm consumers by creating false information. FTC Chair Lina Khan said she believed the agency had sufficient powers under consumer protection and competition laws to police problematic practices by artificial intelligence companies.

“Waiting for Congress to act is not ideal, given the timeline on which Congress typically acts,” said Andres Sawicki, a law professor at the University of Miami.



