This June, in the political battle leading up to the 2024 US presidential primaries, a series of images were released showing Donald Trump embracing one of his former medical advisers, Anthony Fauci. In a few of the shots, Trump is captured awkwardly kissing the face of Fauci, a health official reviled by some US conservatives for promoting masking and vaccines during the COVID-19 pandemic.
“It was obvious” that they were fakes, says Hany Farid, a computer scientist at the University of California, Berkeley, and one of many specialists who examined the pictures. On close inspection of three of the photos, Trump’s hair is strangely blurred, the text in the background is nonsensical, the arms and hands are unnaturally placed and the details of Trump’s visible ear are not right. All are hallmarks — for now — of generative artificial intelligence (AI) and the synthetic media it produces.
Such deepfake images and videos, made by text-to-image generators powered by ‘deep learning’ AI, are now rife. Although fraudsters have long used deception to make a profit, sway opinions or start a war, the speed and ease with which huge volumes of viscerally convincing fakes can now be created and spread — paired with a lack of public awareness — is a growing threat. “People are not used to generative technology. It’s not like it evolved gradually; it was like ‘boom’, all of a sudden it’s here. So, you don’t have that level of scepticism that you would need,” says Cynthia Rudin, an AI computer scientist at Duke University in Durham, North Carolina.
Dozens of systems are now available for unsophisticated users to generate almost any content for any purpose, whether that’s creating deepfake Tom Cruise videos on TikTok for entertainment; bringing back the likeness of a school-shooting victim to create a video advocating gun regulation; or faking a call for help from a loved one to scam victims out of tens of thousands of dollars. Deepfake videos can be generated in real time on a live video call. Earlier this year, Jerome Powell, chair of the US Federal Reserve, had a video conversation with someone he thought was Ukrainian President Volodymyr Zelenskyy, but who wasn’t.
The quantity of AI-generated content is unknown, but it is thought to be exploding. Academics commonly quote an estimate that around 90% of all Internet content could be synthetic within a few years1. “Everything else would just get drowned out by this noise,” says Rudin, which would make it hard to find genuine, useful content. Search engines and social media will just amplify misinformation, she adds. “We’ve been recommending and circulating all this crap. And now we’re going to be generating crap.”
Although a lot of synthetic media is made for entertainment and fun, such as the viral image of Pope Francis wearing a designer puffer jacket, some is agenda-driven and some malicious — including vast amounts of non-consensual pornography, in which someone’s face is transposed onto another body. Even a single synthetic file can make waves: an AI-generated image of an explosion at the US Pentagon that went viral in May, for example, caused the stock market to dip briefly. The existence of synthetic content also allows bad actors to brush off real evidence of misbehaviour by simply claiming that it is fake.
“People’s ability to really know where they should place their trust is falling away. And that’s a real problem for democracy,” says psychologist Sophie Nightingale at Lancaster University, UK, who studies the effects of generative AI. “We need to act on that, and quite quickly. It’s already a huge threat.” She adds that this issue will be a big one in the coming year or two, with major elections planned in the United States, Russia and the United Kingdom.
AI-generated fakes could also have huge impacts on science, say some experts. They worry that the rapidly developing abilities of generative AI systems could make it easier for unscrupulous researchers to publish fraudulent data and images (see ‘Scammed science’ at the end of this article).
For now, some synthetic content contains give-away clues — such as images featuring people with six fingers on one hand. But generative AI is getting better every day. “We’re talking about months” until people can’t tell the difference with the naked eye, says Wael Abd-Almageed, an information scientist and computer engineer at the University of Southern California in Los Angeles.
All of this has researchers scrambling to work out how to harness the deepfake powers of AI for good, while developing tools to guard against the bad. There are two prongs of technological defence: proactively tagging real or fake content when it is generated; and using detectors to catch fakes after publication. Neither is a perfect solution, but both help by adding hurdles to fakery, says Shyam Sundar, a psychologist and founder of the Media Effects Research Laboratory at Pennsylvania State University in University Park. “If you’re a dedicated malicious actor, you can certainly go quite far. The idea is to make it difficult for them,” he says.
Technology will be crucial in the short term, says Nightingale, but “then longer term, maybe we can think more about education, regulation.” The European Union is leading the way globally with its AI Act, which was passed by the European Parliament this June and is awaiting decisions by the two other branches of the EU government. “We’re going to learn important lessons from it for sure,” says Nightingale, “whether they get it right or not.”
Is this just fantasy?
For researchers, generative AI is a powerful tool. It is being used, for example, to create medical data sets that are free of privacy concerns, to help design medicinal molecules and to improve scientific manuscripts and software. Deepfakes are being investigated for their use in anonymising participants in video-based group therapy; creating custom avatars of physicians or teachers that are more compelling for viewers; or allowing for improved control conditions in social-science studies2. “I’m more hopeful than concerned,” says Sundar. “I think it’s transformative as a technology.”
But with the spectre of rampant misuse, researchers and ethicists have attempted to lay down rules for AI, including the 2018 Montreal Declaration for the Responsible Development of Artificial Intelligence and the 2019 Recommendation on Artificial Intelligence from the Organisation for Economic Co-operation and Development. An initiative called the Partnership on AI, a non-profit organization that includes major industry partners, fosters dialogue on best practices — although some observers and participants have questioned whether it has had any impact.
All advocate for the principles of transparency and disclosure of synthetic content. Companies are picking that up: in March, for example, TikTok updated its community guidelines to make it mandatory for creators to disclose the use of AI in any realistic-looking scene. In July, seven leading technology companies — including Meta, Microsoft, Google, OpenAI and Amazon — made voluntary commitments to the White House to mark their AI-generated content. And in September, Google announced that, starting in mid-November, any AI-generated content used in political ads will have to be disclosed on its platforms, including YouTube.
One way to tag synthetic images is to watermark them by altering the pixels in a distinctive way that’s imperceptible to the eye but detectable on analysis. Tweaking every nth pixel so that its colour value is an even number, for example, would create a watermark — but a simple one that would disappear after almost any image manipulation, such as applying a colour filter. Some watermarks have been criticized for being too easy to remove. But deeper watermarks can, for instance, insert a wave of dark-to-light shading from one side of an image to the other and layer it on top of several more such patterns, in a way that can’t be wiped away by fiddling with individual pixels. These watermarks can be difficult (but not impossible) to remove, says Farid. In August, Google released a watermark for synthetic images, called SynthID, without revealing details about how it works; it’s unclear yet how robust it is, says Farid.
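To make the fragility concrete, here is a minimal sketch of the simple pixel-parity watermark described above (Python with NumPy; the step size and the choice of the red channel are arbitrary illustrative assumptions, not any vendor's actual scheme):

```python
# A deliberately simple, fragile watermark: force the red value of every
# STEP-th pixel to an even number, then check for that parity pattern later.
# Purely illustrative -- production watermarks are built to survive edits.
import numpy as np

STEP = 10  # tag every 10th pixel (an arbitrary choice for this sketch)

def embed(image: np.ndarray) -> np.ndarray:
    """Return a copy of an (H, W, 3) uint8 image carrying the parity mark."""
    tagged = image.copy()
    pixels = tagged.reshape(-1, 3)      # contiguous view: one row per pixel
    pixels[::STEP, 0] &= 0xFE           # clear the lowest bit of red -> even value
    return tagged

def looks_watermarked(image: np.ndarray) -> bool:
    red = image.reshape(-1, 3)[::STEP, 0]
    return np.mean(red % 2 == 0) > 0.95  # untagged images sit near 0.5

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(looks_watermarked(embed(img)))      # True
print(looks_watermarked(embed(img) + 1))  # False: a tiny brightness shift erases it
```

Because the pattern lives in individual pixel values, almost any routine edit destroys it, which is exactly the weakness that deeper, layered watermarks are designed to avoid.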
The companion idea to watermarking is to tag a file’s metadata with secure provenance information. For photography, such systems start when a photo is taken, with software on the camera device that ensures that an image’s GPS and time stamps are legitimate, and that the image isn’t a photo of another photo, for example. Insurance underwriters use such systems to validate images of assets and damages, and the news agency Reuters has trialled authentication technology to validate photos of the war in Ukraine.
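The principle behind such provenance tagging can be sketched in a few lines: hash the image bytes at capture, sign the hash together with the capture metadata, and check both later. The record format and signing key below are hypothetical, and a keyed hash stands in for the public-key certificates that real provenance systems rely on:

```python
# Toy provenance record: hash the image bytes and sign them together with the
# capture metadata. An HMAC with a per-device secret is a stand-in for a real
# digital signature; the field names are invented for this sketch.
import hashlib, hmac, json

DEVICE_KEY = b"per-device secret"           # hypothetical signing key

def make_record(image_bytes: bytes, metadata: dict) -> dict:
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,               # e.g. GPS, timestamp, device model
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(DEVICE_KEY, blob, "sha256").hexdigest()
    return payload

def verify(image_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    blob = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(sig, hmac.new(DEVICE_KEY, blob, "sha256").hexdigest())
    ok_hash = hashlib.sha256(image_bytes).hexdigest() == claimed["image_sha256"]
    return ok_sig and ok_hash

photo = b"...raw image bytes..."            # placeholder content
rec = make_record(photo, {"taken_at": "2023-06-01T12:00Z", "gps": "51.5,-0.1"})
print(verify(photo, rec))                   # True
print(verify(photo + b"edited", rec))       # False: any change breaks the record
```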
The Coalition for Content Provenance and Authenticity (C2PA), which brings together key industry groups in technology and publishing, released a first version of a set of technical specifications in 2022 for how systems should track provenance information for both synthetic and real imagery. Plenty of C2PA-compliant tools that embed, track and verify provenance data are now available, and many corporate commitments — such as Microsoft’s — say they will follow C2PA guidelines. “C2PA is going to be very important; it’s going to help,” says Anderson Rocha, an AI researcher at the University of Campinas in Brazil.
Detection detectives
Systems that track image provenance should become the workhorse for cutting down the sheer number of dubious files, says Farid, who is on the C2PA steering committee and is a paid consultant for Truepic, a company in San Diego, California, that sells software for tracking authentic photos and videos. But this relies on ‘good actors’ signing up to a scheme such as C2PA, and “things will slip through the cracks”, he says. That makes detectors a good complementary tool.
Academic labs and companies have produced many AI-based classifiers. These learn the patterns that distinguish AI-made media from real photos, and many systems have reported that they can spot fakes more than 90% of the time, while falsely flagging real images as fakes only 1% of the time or less. But these systems can often be defeated3. A bad actor can tweak images so that the detector is more likely to be wrong than right, says Farid.
AI-based tools can be paired with other techniques that lean on human insights to unravel the fake from the real. Farid looks for clues such as lines of perspective that don’t follow the rules of physics. Other signs are more subtle. He and his colleagues found that facial profiles made by StyleGAN generators, for example, tend to place the eyes in the exact same position in the photo4, providing a hint as to which faces are fakes. Detectors can be given sophisticated algorithms that can, for example, read a clock somewhere in the photo and check to see whether the lighting in the image matches the recorded time of day. Tech company Intel’s FakeCatcher analyses videos by looking for expected colour changes in the face that arise from fluctuations in blood flow. Some detectors, says Rocha, look for distinctive noise patterns generated by light sensors in a camera, which so far aren’t well replicated by AI.
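As a rough illustration of that last clue, a detector can compare an image's high-frequency noise residual against a 'fingerprint' averaged from photos known to come from a genuine camera. The Gaussian-blur denoiser and the correlation threshold below are crude stand-ins for the far more elaborate methods used in practice:

```python
# Simplified sensor-noise check: isolate the high-frequency residual by
# subtracting a blurred copy, then correlate it with a camera 'fingerprint'
# averaged from genuine photos. Real forensic noise analysis is more involved.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(gray: np.ndarray) -> np.ndarray:
    # gray: 2-D float array (grayscale image); the blur acts as a crude denoiser
    return gray - gaussian_filter(gray, sigma=2)

def camera_fingerprint(genuine_images: list[np.ndarray]) -> np.ndarray:
    # all images assumed to be the same size and from the same physical camera
    return np.mean([noise_residual(g) for g in genuine_images], axis=0)

def matches_camera(gray: np.ndarray, fingerprint: np.ndarray,
                   threshold: float = 0.05) -> bool:
    r = noise_residual(gray).ravel()
    corr = np.corrcoef(r, fingerprint.ravel())[0, 1]
    return corr > threshold   # a weak correlation makes the image suspect
```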
The battle between fake-makers and fake-detectives is fierce. Farid recalls a paper by his former student Siwei Lyu, now a computer scientist at the University at Buffalo, New York, that highlighted how some AI videos featured people whose two eyes blinked at different rates5. Generators fixed that problem in weeks, he says. For this reason, even though Farid’s lab publishes the vast majority of its work, he releases code only on a case-by-case basis to academics who request it. Abd-Almageed takes a similar approach. “If we release our tool to the public, people will make their own generation methods even more sophisticated,” he says.
Several detection services that have public user interfaces have sprung up, and many academic labs are on the case, including the DeFake project at the Rochester Institute of Technology in New York and the University at Buffalo’s DeepFake-o-meter. And the US Defense Advanced Research Projects Agency (DARPA) launched its Semantic Forensics (SemaFor) project in 2021, with a broad remit of unearthing the who, what, why and how behind any generative file. A team of nearly 100 academics and corporate researchers have worked together under SemaFor to create more than 150 analytics, says the project’s head, Wil Corvey. The bulk are detection algorithms that can be used in isolation or together.
Because there are a huge number of both generators and detectors, and every case is different, reported accuracy rates vary wildly. And the arms race between them means that the situation is constantly changing. But for many media types, current success rates are poor. For generated text, a review this year of 14 detection tools found that all were “neither accurate nor reliable”6. For video, a high-profile competition in 2020 was won by a system that was only about 65% accurate3 (see also go.nature.com/3jvevoc). For images, Rocha says that if the generator is well known, detectors can easily be more than 95% accurate; but if the generator is new or unknown, success rates typically plummet. Using multiple detectors on the same image can increase the success rate, says Corvey.
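A minimal way to act on Corvey's suggestion is to fuse the scores of several detectors and flag an image only when the pooled score is high; the detector functions in this sketch are placeholders, not real products:

```python
# Naive score fusion: average the probability of 'synthetic' reported by
# several detectors and flag the image when the pooled score is high.
from statistics import mean
from typing import Callable

Detector = Callable[[bytes], float]   # each returns P(image is synthetic), 0..1

def pooled_verdict(image: bytes, detectors: list[Detector],
                   threshold: float = 0.7) -> bool:
    scores = [detect(image) for detect in detectors]
    return mean(scores) >= threshold

# Example with two dummy detectors standing in for real classifiers
print(pooled_verdict(b"...image bytes...", [lambda b: 0.9, lambda b: 0.6]))  # True
```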
Corvey adds that detecting whether something is synthetic is only one part of the puzzle: as more users rely on AI to tweak their content, the more important question is not ‘how much of this is synthetic?’ but rather ‘why was this made?’. For this reason, an important part of SemaFor’s work is to determine the intent behind fakes, by attributing the media to a creator and characterizing its meaning. A parallel DARPA programme, Influence Campaign Awareness and Sensemaking (INCAS), is attempting to develop automated tools to detect the signals of mass misinformation campaigns that might or might not be supported by AI fakery.
The social network
SemaFor is now in the third and final stage of its project, in which Corvey is focusing on reaching out to potential users such as social-media sites. “We have outreach to a number of companies including Google. To our knowledge, none have taken or are running our algorithms on a constant basis on-site,” he says. Meta has collaborated with researchers at Michigan State University in East Lansing on detectors, but hasn’t said how it might use them. Farid works with the employment-focused platform LinkedIn, which uses AI-based detectors to help weed out synthetic faces that support fraudulent accounts.
Abd-Almageed is in favour of social-media sites running detectors on all images on their sites, perhaps adding a warning label to images flagged with a high percentage chance of being fake. But he had no luck when he discussed this a couple of years ago with a company that he would not name. “I told a social network platform, take my software and use it, take it for free. And they said, if you can’t show me how to make money, we don’t care,” says Abd-Almageed. Farid argues, however, that automated detectors aren’t well suited to this kind of use: even a 99% accurate tool would be wrong one time out of 100, which he thinks would completely erode public confidence. In his view, detection should instead be targeted at intensive, human-led investigations of specific cases, rather than trying to police the entire Internet.
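Farid's objection is essentially a base-rate argument: at platform scale, even a 1% false-positive rate mislabels a vast number of genuine images. A back-of-the-envelope calculation (the upload volume is an illustrative assumption) makes the point:

```python
# Back-of-the-envelope: a '99% accurate' detector still wrongly flags 1% of
# real images. Applied to, say, 100 million genuine uploads a day, that is a
# million false alarms daily -- the error volume Farid fears would erode trust.
daily_real_uploads = 100_000_000   # illustrative assumption, not a measured figure
false_positive_rate = 0.01         # wrong one time out of 100
print(f"{daily_real_uploads * false_positive_rate:,.0f} real images wrongly flagged per day")
```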
Many argue that companies such as publishers and social-media sites will need legislation to push them into responsible behaviour. In June, the European Parliament passed a draft law that would strictly regulate high-risk uses of AI and enforce disclosure of content generated by such tools. “The world is watching, because the EU has taken the lead on this,” says Nightingale. But experts disagree widely about the act’s merits and whether it might quash innovation. In the United States, a few AI bills are pending, including one aimed at preventing deepfakes of intimate images and one about the use of AI in political advertising, but neither is certain to pass.
There is one point that experts agree on: improving tech literacy will help to stop society and democracy from drowning in fakes. “We need to get the word out to make people aware of what’s happening,” says Rocha. “When they are informed about it, they can take action. They can demand education in schools.”
Even with all the technological and social tools at our disposal, Farid says, it’s a losing battle to stop or catch all fakery. “But it’s OK, because even in defeat I will have taken this out of the hands of the average person,” he says. Then, as with counterfeit money, it will still be possible to fool the world with generative AI fakes — but much harder.