This tool protects artists from AI image generators
Robots will take over jobs from humans. This is all but guaranteed. The usual assumption is that they will take over manual labor, moving heavy pallets in warehouses and sorting recycling. Now, major advances in generative artificial intelligence mean the machines are coming for artists, too. AI-generated images, created with simple text prompts, are winning art competitions, decorating book covers and promoting “The Nutcracker,” making human artists worry about their future.
This threat feels deeply personal. An image generator called Stable Diffusion learned to recognize patterns, styles and relationships by analyzing billions of images collected from the public internet, along with text describing their content. Among the images it was trained on was the work of Greg Rutkowski, a Polish artist who specializes in fantasy scenes featuring dragons and magical creatures. Pairing Mr. Rutkowski’s work with his name allowed the tool to learn his style so effectively that when Stable Diffusion was released to the public last year, his name became shorthand for users who wanted to generate dreamy fantasy images.
One artist noticed that the whimsical AI selfies from the viral app Lensa had ghostly signatures in them, mimicking what the AI had learned from its training data: the artists who made the portraits signed their work. “These databases were created without the artists’ consent and permission,” Mr. Rutkowski said. Since the image generators arrived, Mr. Rutkowski said, he has received far fewer requests from first-time authors for covers for their fantasy novels. Meanwhile, Stability AI, the company behind Stable Diffusion, recently raised $101 million from investors and is now valued at more than $1 billion.
“Artists are afraid to release new art,” said Ben Zhao, a professor of computer science. Putting art online is how many artists advertise their services, but now they are “afraid of feeding this monster that is becoming more and more like them,” Professor Zhao said. “It shuts down their business model.”
That led Professor Zhao and a team of computer science researchers at the University of Chicago to design a system called Glaze, intended to prevent AI models from learning a particular artist’s style. To design the tool, which they plan to make available for download, the researchers surveyed more than 1,100 artists and worked closely with Karla Ortiz, a San Francisco-based illustrator and artist.
Say, for example, Ms. Ortiz wants to publish new work online but doesn’t want it absorbed by AI. She can upload digital versions of her work to Glaze and choose a style of art different from her own, such as abstract art. The tool then makes changes to Ms. Ortiz’s artwork at the pixel level that Stable Diffusion would associate with, say, Jackson Pollock’s splattered paint blobs. To the human eye, the Glazed images still look like her work, but a machine-learning model would see something very different. It is similar to a tool the University of Chicago team previously built to protect photos from facial recognition systems.
When Ms. Ortiz posts her Glazed work online, an image generator trained on those images won’t be able to mimic her work. A prompt with her name would instead lead to an image in some hybrid style of her work and Pollock’s.
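Glaze’s actual cloaking is an optimization against the style features a model like Stable Diffusion extracts, and it is considerably more involved than this. But the core idea of a small, bounded pixel-level change that stays nearly invisible to a viewer can be sketched in a few lines. The function name, the random perturbation, and the epsilon budget below are illustrative assumptions, not Glaze’s real method:

```python
import numpy as np

def cloak(image: np.ndarray, perturbation: np.ndarray,
          epsilon: float = 8 / 255) -> np.ndarray:
    """Apply a pixel-level perturbation, capped at +/- epsilon per
    pixel so the change stays (nearly) invisible to the human eye.

    In a real style cloak, `perturbation` would be computed by
    optimizing against a feature extractor so the image's style
    features shift toward a target style (e.g. abstract art);
    here it is just an arbitrary small delta.
    """
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + delta, 0.0, 1.0)

# Toy usage: a flat gray "artwork" plus random noise as the delta.
rng = np.random.default_rng(0)
art = np.full((64, 64, 3), 0.5)
noise = rng.normal(scale=0.05, size=art.shape)
cloaked = cloak(art, noise)

# Every pixel moved, but by at most epsilon (about 3% of the range).
print(float(np.abs(cloaked - art).max()))
```

In the real system, the perturbation is not random: it is chosen so the cloaked image’s style features land near the decoy style (such as Pollock’s) while the pixels barely change.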
“We are taking back our consent,” Ms. Ortiz said. AI image generators, many of which charge users to create images, “have data that doesn’t belong to them,” she said. “That data is my artwork, and that’s my life. It feels like my identity.”
The University of Chicago team acknowledges that its tool doesn’t guarantee protection and could prompt countermeasures from anyone determined to imitate a particular artist. “We are pragmatists,” Professor Zhao said. “We recognize that there may be a long delay before laws, regulations and policies catch up. This is meant to fill that gap.”
Many legal experts have compared the debate over the unfettered use of artists’ work to build generative artificial intelligence to concerns about piracy in the early days of the internet, when services such as Napster allowed people to consume music without paying for it. Generative AI companies already face similar court challenges. Last month, Ms. Ortiz and two other artists filed a class-action lawsuit in California claiming copyright and publicity violations against companies providing art-generating services, including Stability AI.
“The allegations in this lawsuit represent a misunderstanding of how generative AI technology works and the laws surrounding copyright,” the company said in a statement. Stability AI has also been sued by Getty Images for copying millions of photos without permission. “We are reviewing these documents and will respond accordingly,” a company spokesman said.
Jeanne Fromer, a professor of intellectual property law at New York University, said the companies may have a strong fair-use argument. “How do human artists learn to create art?” Professor Fromer said. “They’re copying stuff a lot; they’re consuming lots of existing artwork, learning patterns and fragments of style, and then creating new artwork. So, at some level of abstraction, you could say that machines are learning to create art in the same way.”
At the same time, Professor Fromer said, the purpose of copyright law is to protect and encourage human creativity. “If we care about protecting a profession, or we think the making of art is important to who we are as a society, we might want to protect artists,” she said.
A nonprofit organization called the Concept Art Association recently raised funds to hire a lobbying firm to try to convince Congress to protect artists’ intellectual property. “We’re dealing with tech giants with unlimited budgets, but we’re confident that Congress will recognize that protecting intellectual property is the right side of this debate.”
Raymond Ku, a professor of copyright law at Case Western Reserve University, predicts that the makers of art generators will not simply collect artwork from the internet forever but will eventually develop some kind of “private contract system that ensures some level of compensation to the creator.” In other words, when artists’ work is used to train artificial intelligence and inspire new images, they may receive a nominal payment, similar to the way musicians are paid by music streaming companies.
Andy Baio, an author and technologist, has examined Stable Diffusion’s training data and says these services can mimic an artist’s style because they see the artist’s name alongside the artist’s work over and over again. “You can remove names from the dataset,” Mr. Baio said, to prevent the AI from explicitly learning an artist’s style.
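Mr. Baio’s suggestion of removing names from a dataset can be illustrated with a toy caption-scrubbing pass over image-caption pairs. This is a hypothetical sketch, not anything Stability AI is known to run; the dataset format, function name, and blocked-name list are assumptions:

```python
# Hypothetical sketch of scrubbing an artist's name from
# image-caption training pairs so a model cannot link the
# name to the style. The name list is illustrative.
BLOCKED_NAMES = {"greg rutkowski"}

def scrub_captions(pairs):
    """Return the pairs with blocked names cut out of each caption,
    keeping the rest of the caption text intact."""
    cleaned = []
    for image_id, caption in pairs:
        lowered = caption.lower()
        for name in BLOCKED_NAMES:
            while name in lowered:
                start = lowered.index(name)
                caption = (caption[:start] +
                           caption[start + len(name):]).strip(" ,")
                lowered = caption.lower()
        cleaned.append((image_id, caption))
    return cleaned

pairs = [("img1", "castle and dragon by Greg Rutkowski"),
         ("img2", "a bowl of fruit")]
print(scrub_captions(pairs))
```

The style itself remains in the training images; scrubbing only removes the convenient textual handle a prompt would use to summon it.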
One service appears to have already done something along these lines. When Stability AI released a new version of Stable Diffusion in November, it contained a notable change: the prompt “Greg Rutkowski” no longer fetched images in his style, a development noted by Emad Mostaque, the company’s chief executive.
Fans of Stable Diffusion were disappointed. “What did you do to Greg,” someone wrote on the official Discord forum frequented by Mr. Mostaque, who assured forum users that they could customize the model. “It won’t be too hard to train greg,” another responded.
Mr. Rutkowski said he plans to start Glazing his work.