Microsoft calls for AI rules to minimize risks

On Thursday, Microsoft endorsed a series of artificial intelligence regulations intended to address concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations that would require systems used in critical infrastructure to be able to be fully shut down or slowed down, similar to emergency braking systems on trains. The company also called for laws clarifying when additional legal obligations apply to an AI system, and for labels making clear when an image or video was generated by a computer.

“Companies need to step up,” Microsoft President Brad Smith said in an interview about the push for regulation. “Government needs to move faster.” He presented the proposals to an audience that included lawmakers at an event in downtown Washington on Thursday morning.

The call for regulation comes amid a boom in AI, following the wave of interest sparked by the release of the ChatGPT chatbot in November. Since then, companies including Microsoft and Google’s parent, Alphabet, have raced to integrate the technology into their products. That has fueled concerns that these companies are sacrificing safety in the rush to get the next big thing out ahead of their competitors.

Lawmakers have publicly expressed concern that artificial intelligence products that can generate text and images on their own will create a flood of disinformation, be exploited by criminals and put people out of work. Regulators in Washington have pledged to be vigilant against scammers using AI and against situations in which automated systems perpetuate discrimination or make decisions that violate the law.

In response to that scrutiny, AI developers have increasingly called for shifting some of the burden of policing the technology onto governments. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government must regulate the technology.

The move echoes calls by internet companies such as Google and Facebook’s parent, Meta, for new privacy or social media laws. In the United States, lawmakers have been slow to act on such calls, and few new federal regulations on privacy or social media have been enacted in recent years.

In an interview, Mr. Smith said Microsoft was not trying to shirk responsibility for managing the new technology, because it was offering specific ideas and promising to carry out some of them regardless of whether the government acted.

“There is no attempt to shirk responsibility in the slightest,” he said.

He echoed an idea supported by Mr. Altman in his congressional testimony: that government agencies should require companies to obtain licenses to deploy “high-performance” AI models.

“That means you notify the government when you start testing,” Mr. Smith said. “You have to share the results with the government. Even if it is licensed for deployment, you have an obligation to keep monitoring it and to report to the government if something goes wrong.”

Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said these high-risk systems should be allowed to run only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be at a “disadvantage” in providing such services, but said many American competitors could also offer them.

Governments should designate certain AI systems used in critical infrastructure as “high risk” and require them to be equipped with “safety brakes,” Microsoft added. It compared the feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide artificial intelligence systems should be required to know certain information about their customers. And to protect consumers from deception, the company said, content created by AI should carry a special label.

Mr. Smith said companies should bear legal “responsibility” for harms related to AI. In some cases, he said, the liable party could be the developer of an application, such as Microsoft’s Bing search engine, that uses someone else’s underlying AI technology. He added that cloud companies could be responsible for complying with security regulations and other rules.

“We may not necessarily have the best information or the best answers, or we may not be the most credible speakers,” Mr. Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”


