July 26, 2024

Seven leading U.S. artificial intelligence companies have agreed to voluntary safeguards in the development of the technology, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally committed to new standards for safety, security and trust when they met with President Biden at the White House on Friday afternoon.

“We must be clear-eyed and vigilant about the threats that emerging technologies can pose — not necessarily will, but can pose — to our democracy and our values,” Biden said in brief remarks from the Roosevelt Room of the White House.

“It’s a serious responsibility; we have to get it right,” he said, accompanied by the companies’ executives. “And there’s huge potential.”

The news comes as companies race to roll out versions of artificial intelligence that offer powerful new ways to create text, photos, music and video without human input. But as artificial intelligence becomes more sophisticated and humanlike, the technological leap has raised concerns about the spread of disinformation and dire warnings of a “risk of extinction.”

The voluntary safeguards are only an early, tentative step as governments in Washington and around the world seek to create legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can identify AI-generated material.

But lawmakers have struggled to regulate social media and other technologies in a way that keeps pace with rapidly evolving technology.

The White House did not provide details of an upcoming presidential executive order aimed at addressing another issue: how to control the ability of China and other competitors to acquire new artificial intelligence programs or the components used to develop them.

The order is expected to involve new restrictions on advanced semiconductors and on the export of large language models. That software is hard to contain: much of it can fit, compressed, on a thumb drive.

An executive order is likely to draw more industry opposition than Friday’s voluntary pledges, which experts say are already reflected in the practices of the companies involved. These commitments will not limit the plans of AI companies, nor will they impede the development of their technologies. As voluntary commitments, they are not enforced by government regulators.

“We are pleased to join others in the industry in making these voluntary commitments,” Nick Clegg, president of global affairs at Facebook parent company Meta, said in a statement. “They are an important first step in ensuring responsible guardrails for artificial intelligence and set an example for other governments.”

As part of the safeguards, the companies agreed to security testing of their products, in part by independent experts; research on bias and privacy concerns; the sharing of information about risks with governments and other organizations; the development of tools to address societal challenges such as climate change; and transparency measures to identify AI-generated material.

In a statement announcing the agreements, the Biden administration said the companies must ensure “innovation does not come at the expense of the rights and safety of Americans.”

“Companies that are developing these emerging technologies have a responsibility to ensure the safety of their products,” the U.S. government said in a statement.

Microsoft President Brad Smith, one of the executives at the White House meeting, said his company supports voluntary safeguards.

“By acting quickly, the White House’s commitment lays the groundwork to help ensure that the promise of artificial intelligence stays ahead of its risks,” Mr. Smith said.

Anna Makanju, OpenAI’s vice president of global affairs, described the announcement as “part of our ongoing work with governments, civil society organizations, and others around the world to advance AI governance.”

For these companies, the standards described Friday serve two purposes: to prevent or shape legislative and regulatory moves through self-regulation, and to show that they are dealing with new technologies thoughtfully and proactively.

But the rules they agreed on are largely the lowest common denominator, and each company can interpret them differently. For example, the companies committed to strict cybersecurity measures around the data used to build the language models on which generative AI programs are developed. But it was not specified what that means, and the companies would have an interest in protecting their intellectual property anyway.

Even the most cautious companies are vulnerable. Microsoft, one of the companies attending the White House event with Biden, scrambled last week to respond to a Chinese government-organized hack of the private emails of U.S. officials dealing with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails — one of the company’s most closely guarded pieces of code.

Given these risks, the agreement is unlikely to slow down efforts to pass legislation and enforce regulations on emerging technologies.

Paul Barrett, associate director of New York University’s Stern Center for Business and Human Rights, said more needs to be done to guard against the dangers AI poses to society.

“The voluntary commitments announced today are not enforceable, which is why Congress and the White House must immediately enact legislation requiring transparency, privacy protections, and enhanced research into the wide range of risks posed by generative artificial intelligence,” Barrett said in a statement.

European regulators are poised to pass artificial intelligence laws later this year, prompting many companies to encourage U.S. regulation. Several lawmakers have introduced bills that include licenses for AI companies to release their technology, the creation of a federal agency to oversee the industry, and data privacy requirements. But lawmakers are far from agreeing on the rules.

Lawmakers have been grappling with how to respond to the rise of artificial intelligence technology, with some concerned about the risks to consumers and others deeply concerned about falling behind rivals, particularly China, in the race for dominance in the field.

This week, the House China Competition Committee sent a bipartisan letter to U.S. venture capital firms, demanding answers about their investments in Chinese artificial intelligence and semiconductor companies. For months, House and Senate panels have been questioning the AI industry’s most influential entrepreneurs and critics to determine what legislative guardrails and incentives Congress should explore.

Many of these witnesses, including OpenAI’s Sam Altman, implored lawmakers to regulate the AI industry, citing the potential for the new technology to cause undue harm. But that regulation has been slow to move through Congress, and many lawmakers are still struggling to grasp what AI technology really is.

In an effort to improve understanding among lawmakers, Sen. Chuck Schumer, D-N.Y., the majority leader, began a series of meetings this summer to hear from administration officials and experts on the benefits and dangers of artificial intelligence in multiple areas.

Karen Demirjian contributed reporting from Washington.


