April 24, 2024

Artist Stephanie Dinkins has long been a pioneer in combining art and technology in her Brooklyn practice. In May, the Guggenheim Museum awarded her $100,000 for her groundbreaking innovations, including a series of interviews with the humanoid robot Bina48.

For the past seven years, she has experimented with artificial intelligence’s ability to realistically depict Black women smiling and crying, using a variety of text prompts. The first results were disappointing, if not alarming: her algorithm produced a pink-shaded humanoid figure shrouded in a black cloak.

“I was expecting something more like a Black woman,” she said. And although the technology has improved since her first experiments, Dinkins found herself using workaround terms in her text prompts to help the AI image generators achieve the image she wanted, “to give the machine a chance to give me what I want.” But whether she used the term “African American woman” or “Black woman,” the machine frequently produced distortions, mangling facial features and hair texture.

“The improvements mask some of the deeper questions we should be asking about discrimination,” Dinkins said. The artist, who is Black, added: “Bias is embedded deep in these systems, so it becomes ingrained and automatic. If I work within a system that uses an algorithmic ecosystem, then I want that system to know who Black people are in nuanced ways, so that we can feel better supported.”

She is not alone in raising tough questions about the troubling relationship between AI and race. Many Black artists have found evidence of racial bias in artificial intelligence, both in the large datasets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies appear to ignore or distort artists’ text prompts, affecting how Black people are depicted in images; in other cases, they appear to stereotype or censor Black history and culture.

Discussion of racial bias in artificial intelligence has proliferated in recent years, with studies suggesting facial recognition technology and digital assistants struggle to recognize images and voice patterns of non-white people. These studies raise broader questions of fairness and bias.

The major companies behind AI image generators, including OpenAI, Stability AI and Midjourney, have pledged to improve their tools. “Bias is an important, industry-wide problem,” Alex Beck, a spokeswoman for OpenAI, said in an email interview, adding that the company is continuously trying to “improve performance, reduce bias and mitigate harmful outputs.” She declined to say how many employees were working on racial bias, or how much money the company had allocated toward the problem.

“Black people are accustomed to being unseen,” the Senegalese artist Linda Dounia Rebeiz wrote in the introduction to her exhibition “In/Visible” on Feral File, an NFT marketplace. “When we are seen, we are accustomed to being misrepresented.”

To make her point during an interview with a reporter, Rebeiz, 28, asked DALL-E 2, OpenAI’s image generator, to imagine buildings in her hometown, Dakar. The algorithm produced arid desert landscapes and ruined buildings that Rebeiz said were nothing like the coastal homes of the Senegalese capital.

“It’s demoralizing,” Rebeiz said. “The algorithm is biased toward images of African culture created by the West. It defaults to the worst stereotypes that already exist on the internet.”

Last year, OpenAI said it was building new techniques to diversify the images produced by DALL-E 2, so that the tool “generates images of people that more accurately reflect the diversity of the world’s population.”

Minne Atairu, an artist in Rebeiz’s exhibition and a Ph.D. candidate at Columbia University’s Teachers College, had planned to use image generators with young students of color in the South Bronx. But she now worries “that might cause students to generate offensive images,” Atairu explained.

Included in the Feral File exhibition are images from her “Blonde Braid Study,” which explores the limitations of Midjourney’s algorithm in generating images of Black women with naturally blond hair. When the artist asked for an image of Black identical twins with blond hair, the program produced a sibling with lighter skin instead.

“This tells you where the algorithm is pulling images from,” Atairu said. “It’s not necessarily drawing from a corpus of Black people, but from one geared toward white people.”

She said she worried that young Black children might try to generate images of themselves and see children whose skin had been lightened. Atairu recalled some of her earlier experiments with Midjourney, before a recent update improved its capabilities. “It generated images that were like blackface,” she said. “You’d see a nose, but it wasn’t a human nose. It looked like a dog’s nose.”

“If anyone sees a problem with our system, we ask them to send us a specific example so we can investigate,” Midjourney founder David Holz said in an email in response to a request for comment.

Stability AI, which provides image-generation services, said it planned to collaborate with the AI industry to improve bias-evaluation techniques across a greater diversity of countries and cultures. The company said the bias was caused by “overrepresentation” in its general datasets, though it did not specify whether overrepresentation of white people was the problem.

Earlier this month, Bloomberg researchers analyzed more than 5,000 images generated by Stability AI and found that its program amplified racial and gender stereotypes, typically portraying people with lighter skin tones as holding high-paying jobs while labeling darker-skinned subjects “dishwasher” and “housekeeper.”

These problems have not stopped the investment frenzy in the tech sector. A recent upbeat report from the consulting firm McKinsey predicted that generative AI would add $4.4 trillion to the global economy annually. Nearly 3,200 startups raised $52.1 billion in funding last year, according to the GlobalData Deals database.

Tech companies have grappled with allegations of bias in depictions of darker skin since the early days of color photography in the 1950s, when companies like Kodak used white models in their color development. Eight years ago, Google disabled its AI program’s ability to let people search for gorillas and monkeys through its Photos app after the algorithm incorrectly categorized Black people into those categories. As of May this year, the issue remained unresolved. Two former employees who worked on the technology told The New York Times that Google had not trained its AI systems with enough images of Black people.

Other experts in artificial intelligence say bias runs deeper than datasets, pointing to the technology’s early development in the 1960s.

“The problem is more complicated than data bias,” said Dobson, a cultural historian at Dartmouth College and the author of a recent book, “The Birth of Computer Vision.” According to his research, there was little discussion of race in the early days of machine learning, and most of the scientists working on the technology were white.

“It’s hard to separate today’s algorithms from history because engineers are building on those previous versions,” Dobson said.

To reduce the appearance of racial bias and hateful imagery, some companies ban certain words, such as “slave” and “fascist,” from text prompts users submit to the generator.
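Such keyword bans are typically blunt, surface-level filters. A minimal sketch of how a prompt blocklist like the one described above might work; the banned terms and matching rules are illustrative assumptions, not any company’s actual moderation code, and the sketch also shows why a synonym such as “corsair” can slip past:

```python
# Illustrative prompt blocklist: a simplified assumption about how
# keyword-based prompt filtering might work, not a vendor's real code.
BANNED_TERMS = {"slave", "fascist"}

def is_prompt_blocked(prompt: str) -> bool:
    """Return True if any banned term appears as a word in the prompt."""
    words = prompt.lower().split()
    return any(term in words for term in BANNED_TERMS)

print(is_prompt_blocked("a painting of a slave ship"))  # blocked
print(is_prompt_blocked("a painting of a corsair"))     # not blocked: synonym evades the filter
```

Because the filter only matches literal words, it blocks the historical prompt outright while letting a near-synonym through, which is exactly the mismatch the artists below describe.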

But companies that look for simple solutions, such as vetting the kinds of prompts users can submit, are sidestepping the more fundamental problems of bias in the underlying technology, Dobson said.

“It’s a worrying time as these algorithms become more complicated. And when you see garbage coming out, you have to wonder what kind of garbage-like process is still going on inside the model,” the professor added.

Auriea Harvey, an artist included in the Whitney Museum’s recent exhibition “Refigured,” about digital identities, encountered these bans in a recent project using Midjourney. “I wanted to ask what the database knew about slave ships,” she said. “I got a message that my account would be suspended if I continued.”

Dinkins ran into similar problems with NFTs she created and sold showing how okra was brought to North America by enslaved people and settlers. She was censored when she tried to use the generative program Replicate to make pictures of slave ships. She eventually learned that the term “corsair” could outwit the censors. The images she received approximated what she wanted, but they also raised uncomfortable questions for the artist.

“What is this technology doing to history?” Dinkins asked. “You can see people trying to correct bias, but at the same time they erase a piece of history. I find those erasures as dangerous as any bias, because we are simply going to forget how we got here.”

Guggenheim chief curator Naomi Beckwith credited Dinkins’s nuanced approach to problems of representation and technology as one of the reasons the artist received the museum’s first Art and Technology award.

“Stephanie has become part of a tradition of artists and cultural workers who poke holes in these overarching theories about how things work,” Beckwith said. The curator added that her own initial paranoia about AI programs replacing human creativity was greatly reduced when she realized these algorithms know virtually nothing about Black culture.

But Dinkins isn’t ready to abandon the technology just yet. She continues to use it for her art projects, with skepticism. “Once the system can generate a really high-fidelity image of a Black woman crying or smiling, can we rest?”
