June 9, 2023

Credit: Siri Rios/The New York Times

Eight years after the controversy over black people being mislabeled as gorillas by image analysis software — despite huge advances in computer vision — tech giants are still afraid of repeating the same mistakes.


When Google released its standalone Photos app in May 2015, people marveled at what it could do: analyze images to label the people, places and things in them, a striking consumer feature at the time. But a few months after the release, the software developer Jacky Alciné discovered that Google had tagged photos of him and a friend, who is also black, as “gorillas,” a label that was particularly offensive because it echoed centuries of racist tropes.

In the ensuing controversy, Google blocked its software from classifying anything in a photo as a gorilla and vowed to fix the problem. Eight years later, with major advances in artificial intelligence, we tested whether Google had solved the problem, and we looked at similar tools from its competitors Apple, Amazon, and Microsoft.

The one member of the primate family that Google and Apple could identify was the lemur, a long-tailed animal that shares an opposable thumb with humans but is more distantly related to people than the great apes are.

Google’s and Apple’s tools are clearly the most advanced when it comes to image analysis.

However, Google, whose Android software underpins most of the world’s smartphones, has decided to turn off the ability to visually search for primates rather than risk the offensive mistake of labeling a person as an animal. Apple, whose technology performed similarly to Google’s in our tests, also appears to have disabled the ability to search for monkeys and apes.

Consumers may not need to perform such searches very often, though in 2019 one iPhone user complained on Apple’s customer support forums that the software “can’t find monkeys in photos on my device.” But the issue raises a larger question about other unfixed, or unfixable, flaws lurking in services that rely on computer vision, a technology that interprets visual images, and in other AI-powered products.

Mr. Alciné was dismayed to learn that Google still had not fully solved the problem, and he said society places too much trust in technology.

“I will forever lose faith in this kind of artificial intelligence,” he said.

Computer vision products are now being used for mundane tasks like sending an alert when there is a package at the door, as well as for consequential tasks like helping cars navigate and identifying suspects in law enforcement investigations.

Mistakes can reflect the racist attitudes of those encoding the data. In the gorilla incident, two former Google employees who worked on the technology said the problem was that the company had not included enough images of black people in the collections used to train its AI systems. As a result, the technology was not familiar enough with darker-skinned people and confused them with gorillas.

As artificial intelligence becomes more and more pervasive in our lives, it raises fears of unintended consequences. While computer vision products and AI chatbots like ChatGPT are different, both rely on the underlying data that trains the software, and both can fail due to flaws in the data or bias in the code.

Microsoft recently restricted users’ ability to interact with a chatbot built into its search engine, Bing, after the bot instigated inappropriate conversations.

Microsoft’s decision, like Google’s choice to prevent its algorithms from recognizing gorillas entirely, illustrates a common industry approach of blocking malfunctioning tech features rather than fixing them.

“It’s important to address these issues,” said Vicente Ordóñez, a professor at Rice University who studies computer vision. “How can we trust this software to be used in other scenarios?”

Google has prevented its Photos app from labeling anything as a monkey or ape because it believes the benefits of doing so “do not outweigh the risk of harm,” Google spokesman Michael Marconi said.

Apple declined to comment on the inability of users to search for most primates in its app.

Representatives for Amazon and Microsoft said the companies are always looking to improve their products.

When Google developed its Photos app eight years ago, it collected reams of images to train artificial intelligence systems to recognize people, animals and objects.

Its major oversight, not including enough images of black people in its training data, led to the app’s later glitches, two former Google employees said. The company failed to spot the gorilla problem at the time because it had not asked enough employees to test the feature before its public debut, the former employees said.

Google apologized for the gorilla incident, but it was one of many incidents in the broader tech industry that have led to allegations of bias.

Other products that have drawn criticism include HP’s face-tracking webcams, which failed to detect some people with dark skin, and the Apple Watch, which, according to a lawsuit, failed to accurately read blood oxygen levels across skin tones. These missteps suggested that tech products were not being designed for people with darker skin. (Apple pointed to a paper that detailed its efforts, starting in 2022, to test its blood oxygen app on a “wide range of skin types and tones.”)

Years after the Google Photos bug, the company encountered similar problems with its Nest home security cameras during internal testing, according to a person familiar with the matter who worked at Google at the time. The Nest camera, which uses AI to determine whether someone on a property is familiar or unfamiliar, mistook some black people for animals. Google rushed to fix the problem before users could access the product, the person said.

However, Nest customers continued to complain about other flaws on company forums. In 2021, a customer was alerted that his mother was ringing the doorbell, only to find his mother-in-law on the other side of the door. When users complained that the system confused faces they’d marked as “familiar,” a customer support representative in the forum advised them to remove all tags and start over.

“Our goal is to prevent mistakes like this from happening,” said Mr. Marconi, a Google spokesman, adding that the company had improved its technology “by collaborating with experts and diversifying our image datasets.”

In 2019, Google attempted to improve facial recognition on Android smartphones by increasing the number of dark-skinned people in its dataset. But contractors hired by Google reportedly used a troubling tactic to make up for the lack of diverse data when they collected facial scan data: They targeted homeless people and students. Google executives called the incident “very disturbing” at the time.

While Google has worked behind the scenes to improve the technology, it has never allowed users to judge those efforts.

Margaret Mitchell, a researcher and co-founder of Google’s Ethical AI group, joined the company after the gorilla incident and worked with the Photos team. She said in a recent interview that she supports Google’s decision to remove “the gorilla tag, at least temporarily.”

“You have to think about how often people need to label gorillas without perpetuating harmful stereotypes,” Dr. Mitchell said. “The benefits do not outweigh the potential harm of doing the wrong thing.”

Dr. Ordóñez speculated that Google and Apple could now tell the difference between primates and humans, but that they do not want to enable the feature because of the reputational risk it could pose if it fails again.

Google has since released a more powerful image analysis product, Google Lens, a tool that searches the web using photos rather than text. Wired discovered in 2018 that the tool was also unable to identify gorillas.

Dr. Mitchell, who is no longer at Google, said the systems were never foolproof. With billions of people using Google’s services, even rare glitches that occur with just one in a billion users can surface.

“It only takes one mistake to have huge social impact,” she said, calling it “the needle in the haystack.”


