May 8, 2024

European lawmakers are finalizing work on an AI bill. The Biden administration and congressional leaders have laid out plans to curb artificial intelligence. Last week, in Senate testimony, OpenAI CEO Sam Altman suggested creating a federal agency with oversight and licensing powers. The topic was brought up at the G7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in AI regulation.

The city government passed a law in 2021, and adopted specific rules last month, for one high-stakes application of the technology: hiring and promotion decisions. Enforcement began in July.

The city’s law requires companies that use artificial intelligence software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request to learn what data is being collected and analyzed. Companies face fines for violations.

New York City’s focused approach represents an important frontier in AI regulation. At some point, experts say, the rough principles laid out by governments and international organizations must be translated into details and definitions. Who is affected by the technology? What are the benefits and harms? Who can intervene, and how?

“You can’t answer these questions without specific use cases,” said Julia Stoyanovich, associate professor and director of the Center for Responsible Artificial Intelligence at New York University.

But even before it went into effect, New York City’s law had drawn criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

Complaints from both camps point to the challenges of regulating AI, which is advancing at a rapid pace with unclear consequences, fueling both enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich fears there are loopholes in the city’s law that could weaken it. “But it’s much better than no law,” she said. “Unless you try to regulate, you don’t learn how to regulate.”

The law applies to companies with employees in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate AI in hiring. Illinois and Maryland have enacted laws restricting the use of specific AI technologies, often for workplace surveillance and the screening of job candidates.

New York City’s law emerged from a clash of sharply conflicting viewpoints. The City Council passed the bill in the final days of Mayor Bill de Blasio’s term. Several rounds of hearings and more than 100,000 words of public comment followed, overseen by the city’s rulemaking agency, the Department of Consumer and Worker Protection.

The result, some critics say, has been overly sympathetic to commercial interests.

“What could have been a landmark law was watered down until it lost its effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights group.

That’s because the law defines an “automated employment decision tool” as technology that “substantially assists or replaces discretionary decision-making,” she said. Ms. Givens said the city had adopted rules that appeared to interpret that wording so narrowly that an audit would be required only if AI software was the sole or primary factor in a hiring decision, or was used to overrule a human.

That interpretation excludes the main way automated software is actually used, she said, with a hiring manager invariably making the final choice. The potential for AI-driven discrimination, she said, typically arises in screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups it measures for unfair treatment. It covers bias based on gender, race and ethnicity, but not discrimination against older workers or people with disabilities.

“My biggest fear is that this will become a template across the country when we should be asking more of our policymakers,” Ms. Givens said.

City officials said the law had been narrowed to sharpen it and make sure it was relevant and enforceable. The Council and the city’s consumer and worker protection agency heard from many voices, including public interest activists and software companies. The goal, officials said, was to weigh innovation against potential harm.

“This is a significant regulatory success in ensuring that AI technology is used ethically and responsibly,” said Robert Holden, who was chairman of the Council’s technology committee when the law was passed and remains a member of the committee.

New York City is trying to address the new technology within the context of federal workplace laws, whose guidelines on hiring date to the 1970s. The Equal Employment Opportunity Commission’s key rule holds that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group such as women or minorities.
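As an illustration, with hypothetical numbers: if a selection method advances 30 percent of applicants from one group but only 15 percent from a legally protected group, the ratio of their selection rates is 0.5, well below the four-fifths (0.8) threshold that federal selection guidelines have long used as a rule of thumb for adverse impact.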

The business community has criticized the law. In public comments this year, the Software Alliance, an industry group that includes Microsoft, SAP and Workday, said the requirement for independent audits of AI was “not feasible” because “the field of auditing is nascent” and lacks standards and a professional watchdog.

But an emerging field is a market opportunity. The AI audit business, experts say, is poised to grow, and it has already attracted law firms, consultants and start-ups.

Companies that sell artificial intelligence software to aid in hiring and promotion decisions have generally embraced regulation. Some have been subject to external audits. They see this requirement as a potential competitive advantage, demonstrating that their technology broadens the company’s pool of candidates and increases opportunities for workers.

“We believe we can follow the law and show what good AI is,” said Roy Wang, general counsel at Eightfold AI, a Silicon Valley startup that makes software to assist hiring managers.

New York City’s law also takes an approach to regulating AI that may become the norm. The law’s key measurement is an “impact ratio,” a calculation of the effect that using the software has on a protected group of job applicants. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”
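To make that measurement concrete: under the city’s adopted rules, an impact ratio compares the selection rate for each demographic category with the rate for the most-selected category. Below is a minimal sketch of that arithmetic in Python; the function names and sample numbers are hypothetical, and a real bias audit involves far more than this calculation.

```python
# Minimal sketch of the "impact ratio" idea behind the city's rules:
# the selection rate for each demographic category divided by the
# selection rate of the most-selected category. All names and sample
# numbers here are hypothetical; a real bias audit is far more involved.

def selection_rates(outcomes):
    """Map each category to selected / total applicants."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Compare every category's selection rate with the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, total applicants).
screening = {
    "group_a": (120, 400),  # 30% advanced
    "group_b": (45, 300),   # 15% advanced
}

for category, ratio in impact_ratios(screening).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
# group_a: impact ratio = 1.00
# group_b: impact ratio = 0.50
```

A ratio well below 1.0 for a protected category, like the 0.50 in this sketch, is the kind of disparity an annual audit is meant to surface.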

Critics say that in a life-altering application like hiring, people have the right to an explanation of how a decision was reached. But some experts say that as AI software, like the kind behind ChatGPT, grows more sophisticated, the goal of explainable AI may be out of reach.

“The focus becomes the output of the algorithm, not the work of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certification for the safe use of AI applications in the workplace, healthcare and finance.


