May 29, 2023


Seeing has long been believing. But photos have been faked and manipulated for nearly as long as photography has existed.

Now, reality isn’t even required for a photo to look authentic — just AI responding to a prompt. Even experts sometimes struggle to tell whether an image is real or not. Can you?

The rapid emergence of artificial intelligence has set off alarms that the technology used to deceive people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.

These advances have fueled disinformation and been used to stoke political division. Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals. Last month, some people were fooled by images showing Pope Francis in a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither event had occurred. The images were created with Midjourney, a popular image generator.

As former President Donald J. Trump turned himself in to face criminal charges at the Manhattan district attorney’s office on Tuesday, images generated by artificial intelligence appeared on Reddit showing the actor Bill Murray in the White House. Another image, showing Mr. Trump marching in front of a large crowd with an American flag in the background, was quickly reshared without the disclosure that had accompanied the original post noting that it was not actually a photograph.

Experts fear the technology will hasten the erosion of trust in the media, government and society. How can we trust anything we see if any image can be fabricated and manipulated?

“The tools are going to get better, they’re going to get cheaper, and there will come a time when everything you see on the internet can’t be trusted,” said Wasim Khaled, the chief executive of Blackbird.AI, a company that helps clients fight disinformation.

Artificial intelligence lets almost anyone create complex artwork, like the pieces now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between reality and fiction. Type in a text description, and the technology produces a related image — no special skills required.

Often, there are signs that viral images were created by a computer rather than captured in real life: the lavishly dressed pope, for example, had glasses that seemed to melt into his cheek and blurry fingers. AI art tools also often produce nonsensical text.

However, rapid advances in the technology are eliminating many of those flaws. The latest version of Midjourney, released last month, can render realistic-looking hands, a feat that had conspicuously eluded earlier imaging tools.

Photos of Mr. Trump’s “arrest” circulated on social media days before he turned himself in to face criminal charges in New York City. They were created by Eliot Higgins, a British journalist and founder of the open-source investigative group Bellingcat. He used Midjourney to imagine the former president being arrested, put on trial, imprisoned in an orange jumpsuit and escaping through the sewers. He posted the images on Twitter, clearly marking them as creations. They have since been widely shared.

The images were not meant to fool anyone. Instead, Mr. Higgins hoped to draw attention to the tool’s powerful capabilities — even in its infancy.

The Midjourney images were able to pass through the facial recognition programs that Bellingcat uses to verify identities, often of Russians who have committed crimes or other abuses, he said. It is not hard to imagine governments or other bad actors manufacturing images to harass or discredit their enemies.

At the same time, Mr. Higgins said, the tool struggled to create convincing images of people who are not photographed as often as Mr. Trump, such as Britain’s new prime minister, Rishi Sunak, or the comedian Harry Hill, who “may not be known outside the UK.”

Midjourney, in any case, was not amused. It suspended Mr. Higgins’s account without explanation after the images went viral. The company did not respond to a request for comment.

The limitations of generated images make them relatively easy to detect by news organizations or others attuned to the risks — at least for now.

Nonetheless, stock photo companies, government regulators and a music industry trade group have moved to protect their content from unauthorized use, though the technology’s powerful ability to imitate and adapt is complicating those efforts.

Some AI image generators have even reproduced images – a queasy homage to “Twin Peaks”; Will Smith eating fistfuls of spaghetti – with distorted versions of the watermarks used by companies like Getty Images or Shutterstock.

In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argues that Stable Diffusion undercuts the value of Getty’s watermark by incorporating it into images that range from “weird to grotesque.”

Getty said the scale of the “brazen theft and free-riding” was “staggering.” Stability AI did not respond to a request for comment.

Getty’s lawsuit reflects concerns raised by many individual artists — that AI companies are becoming a competitive threat by copying content they do not have the right to use.

Trademark infringement has also become a concern: artificially generated images have replicated NBC’s peacock logo, albeit with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with an extra O looped into the name.

In February, the US Copyright Office weighed in on artificially generated imagery when it evaluated the case of “Zarya of the Dawn,” an 18-page comic book written by Kristina Kashtanova with art generated by Midjourney. The government regulator decided to copyright the comic book’s words, but not its art.

“Due to the large distance between what users may instruct Midjourney to create and the visual material Midjourney actually generates, Midjourney users lack sufficient control over the resulting imagery to be considered the ‘masterminds’ behind it,” the office explained in its decision.

Mickey H. Osterreicher, general counsel for the National Press Photographers Association, said the threats posed to photography are rapidly outpacing the development of legal protections. It will become increasingly difficult for newsrooms to verify content, he said, and social media users may ignore labels that clearly identify images as AI-generated, choosing to believe they are real photographs.

Generative AI is also making fake videos easier to create. This week, a video appeared online that seemed to show Nina Schick, an author and an expert on generative artificial intelligence, explaining how the technology is creating “a world where shadows are mistaken for real things.” As the camera pulls back, Ms. Schick’s face glitches, and a stand-in appears in her place.

The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader verification of digital content.

The companies described their video, which carries a stamp identifying it as computer-generated, as “the first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when it is viewed in trusted software.
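The sealing step works like an ordinary digital signature over the media file. Here is a minimal Python sketch of the idea, assuming a simple Ed25519 keypair and a SHA-256 hash; it is an illustration, not Revel.ai’s or Truepic’s actual scheme:

    # Sketch of cryptographically sealing a media file. Illustrative
    # assumptions only: the companies' real credential format, keys and
    # hashing choices are not described in this article.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # held by the content creator
    public_key = signing_key.public_key()       # known to trusted software

    video_bytes = b"...video data..."           # stand-in for the file contents
    signature = signing_key.sign(hashlib.sha256(video_bytes).digest())

    # A trusted viewer recomputes the hash and checks the seal.
    tampered = video_bytes + b"!"               # even a one-byte edit
    try:
        public_key.verify(signature, hashlib.sha256(tampered).digest())
    except InvalidSignature:
        print("Seal broken: credentials will not be displayed.")

Because any edit to the file changes its hash, the stored signature no longer verifies, and trusted software withholds the credentials, which is the behavior the companies describe.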

The companies hope the badge, which commercial clients will pay to use, will be adopted by other content creators to help create a standard of trust around AI images.

“The scale of this problem is going to grow so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, Truepic’s chief executive.

Truepic is part of the Coalition for Content Provenance and Authenticity, a project set up through an alliance of companies including Adobe, Intel and Microsoft to better trace the origins of digital media. The chipmaker Nvidia said last month that it was working with Getty to help train “responsible” AI models using Getty’s licensed content, with royalties paid to artists.

On the same day, Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that are licensed, drawn from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, wrote on its website that the tool automatically adds content credentials — “like a nutrition label for imaging” — that identify how an image was made. Adobe said it also planned to compensate contributors.
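The “nutrition label” idea amounts to provenance metadata carried inside the image file itself. As a toy sketch only, here is how such a label could be stored in a PNG text chunk using Python’s Pillow library; the field names are hypothetical, and real content credentials use the coalition’s C2PA format instead:

    # Toy "nutrition label": provenance metadata embedded in a PNG text
    # chunk. Field names are hypothetical; real content credentials use
    # the C2PA format and are paired with a cryptographic signature.
    import json

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    label = json.dumps({
        "generator": "example-image-model",
        "created": "2023-05-29",
        "ai_generated": True,
    })

    metadata = PngInfo()
    metadata.add_text("content_credentials", label)
    Image.new("RGB", (64, 64), "white").save("labeled.png", pnginfo=metadata)

    # Any viewer can read back how the image says it was made.
    print(Image.open("labeled.png").text["content_credentials"])

A bare label like this can be stripped or forged, which is why credential systems pair the label with the kind of cryptographic seal described above.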

Last month, the model Chrissy Teigen wrote on Twitter that she had been fooled by the pope’s puffy jacket, adding, “there’s no way I’m surviving the tech future.”

Last week, a series of new AI images showed the pope, back in his usual robes, enjoying a mug of beer. The hands appeared mostly normal – save for the wedding band on the pontiff’s ring finger.

Additional production by Jeanne Noonan DelMundo, Aaron Krolick and Michael Andre.





