'It's kind of the Wild West': Media gears up for onslaught of deepfakes

By Michael Calderone

A video of Republican Mitt Romney saying in 2012 that 47 percent of Americans were dependent on the government helped sink his presidential bid, and an unearthed clip of President Donald Trump bragging about grabbing women shook up the 2016 campaign.

In 2020, the race could again be rattled by videos of candidates that emerge — but this time, media organizations are worried about being able to tell whether they're real.

News organizations are taking steps to tackle the problem of deepfakes, videos created through artificial intelligence technology to appear to show someone saying or doing things that never occurred. A more low-tech doctored clip, or “cheapfake,” of House Speaker Nancy Pelosi last month that tried to make her appear drunk spotlighted the growing challenge of combating misinformation this election season.

In an effort to prevent questionable clips from duping reporters, Reuters created its own deepfakes as a training exercise to see if journalists could tell they weren’t real. The Wall Street Journal’s ethics & standards and research & development teams launched a committee last fall to tackle the problem of doctored video, studying forensic technologies for identifying fakes and asking journalists to flag suspicious content.

And on Tuesday, the Washington Post will launch a public-facing “Fact Checker's Guide to Manipulated Video,” which will try to help voters spot misleading material by classifying videos into three categories: “Missing Context,” “Deceptive Editing,” and “Malicious Transformation,” which includes deepfakes.

The Washington Post’s Glenn Kessler said in an interview that his concern is “extremely high” that manipulated videos could be used to mislead the public.

“You’re just waiting for that kind of bomb to explode,” Kessler told POLITICO. “So, we’re trying to get ahead of these things.”

Francesco Marconi, the Journal’s research and development chief, told POLITICO that media organizations will likely struggle to stay on top of what’s real and what’s fake in 2020. Some methods of spotting fakes already appear obsolete as the technology to make them has progressed. For instance, people in early deepfakes didn’t blink; now they can. And once-blurry backgrounds are crisper.

“It’s a cat and mouse game,” he said.

So-called fake news permeated the 2016 campaign, some of it spread intentionally by Russian-sponsored social media trolls as part of an effort to disrupt the election, special counsel Robert Mueller found. But advances in video editing and artificial intelligence software have made it even easier to create counterfeit clips.

Complicating matters, major tech companies haven’t adopted consistent standards for dealing with such false material.

YouTube removed the “drunk” Pelosi video last month — which had been slowed to make it appear she was slurring her words — while Facebook allowed it to stay up. Recently, Facebook-owned Instagram opted not to remove a deepfake video that purported to show Facebook chief Mark Zuckerberg bragging about controlling “stolen data.”

Instagram head Adam Mosseri acknowledged Tuesday on “CBS This Morning” that “we don’t have a policy against deepfakes currently.”

Two academics wrote an essay for Harvard University’s Nieman Lab describing seven hypothetical scenarios in which manipulated video and audio could disrupt the 2020 election and even cast doubt on the democratic process itself. The scenarios ranged from relatively benign, such as supporters doctoring video to boost a candidate’s record, to more destabilizing, such as suppressing votes by telling Americans that fake videos of them engaged in incriminating behaviors will be released if they go to the polls.

The Post’s editorial board recently urged the government “to invest in developing technology to detect deepfakes.” And fears about this confusing new world prompted the first congressional hearing on the matter this month.

“Thinking ahead to 2020 and beyond, one does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake,” said House Intelligence Chairman Adam Schiff (D-Calif.).

Hany Farid, a University of California, Berkeley professor and digital forensics expert, demonstrated software on CBS last week that he is developing to detect altered videos, which he said could eventually be used by the news media.

Farid told POLITICO he isn’t currently “working with any specific news organizations,” but “as we roll out our analysis tools, we hope to begin to work with a range of organizations.”

The WSJ’s Marconi told POLITICO that “by 2020, there will be massive proliferation” of deepfakes, so the paper wanted to be proactive in addressing them. He said the Journal’s committee serves as a newsroom resource in providing training, webinars and arranging guest speakers on the topic.

“This is an issue the entire newsroom takes very seriously,” Washington bureau chief Paul Beckett said in a statement. “All of our reporters covering the 2020 campaigns are being trained to be aware of the potential for deep fakes.”

And Kessler and Nadine Ajaka, a senior video producer at the Post who is working with Kessler’s team, said they hope the Post’s classification system will prompt major platforms like YouTube to alert viewers when videos are found to have been manipulated.

“When you name something, it’s less terrifying,” said Ajaka. The classification system, she added, is a step toward giving the public a greater understanding of “a world in which you can’t trust everything you see.”

“Right now, it’s kind of the Wild West,” she said.