On Tuesday, Congress held a hearing focused on the problem of digitally altered video and audio known as deepfakes.
House Intelligence Committee Chairman Adam Schiff opened the hearing by running through examples of the technology, adding that "one does not need any great imagination to envision even more nightmarish scenarios" as the 2020 election heats up and the technology becomes more advanced.
The panel heard from four experts on the problem. One of them, Professor Danielle Citron, summed up the panelists’ views by noting "there is no silver bullet" to the problem.
“The circulation of deepfakes has potentially explosive implications for individuals and society," Citron said in her written testimony. "Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy.”
But the group did propose some avenues for lawmakers and the legal community to consider.
The most direct way to combat the problem would be some sort of technology to detect deepfakes.
"it may be possible for large-scale technology platforms to try and develop and share tools for the detection of malicious and synthetic media,” Jack Clark, policy director at OpenAI, said.
His fellow panelists questioned how far such technology would be able to go. Dr. David Doermann, a professor of computer science, noted that "if history is any indicator, it's only a matter of time before the current detection capabilities will be rendered less effective."
Lawmakers were somewhat skeptical as to whether technology companies should be trusted to implement any solutions. Democrats repeatedly pointed to a recent doctored video of Rep. Nancy Pelosi that Facebook declined to take down. Rep. Devin Nunes, the ranking Republican on the committee, focused instead on what he says is systematic censorship against conservatives. Nunes argued that “most of the time, it’s conservatives who get banned.”
Citron, a professor of law at the University of Maryland, focused on legal measures that could be taken. "The law has a modest role to play" when it comes to legal recourse for people who see their images manipulated online, she said. Currently, Citron argues that “criminal law has too few levers for us to push."
During his questioning, Schiff followed up on this theme and mused about whether tech companies need to be held liable. Companies like Facebook (FB) currently enjoy immunity, but Schiff wondered “if it’s time to do away with” these protections. Citron argued that the answer should be yes.
Whether or not Congress takes action, she noted that there are limits to any legal solutions. “You have to be able to find the defendant to prosecute them and you've got to have jurisdiction over them," she said.
Using the intelligence community
Clint Watts, a research fellow at the Foreign Policy Research Institute, picked up on the jurisdiction problem.
"The U.S. government, from a national security perspective, should maintain intelligence on adversaries’ capabilities in deploying deepfake content or the proxies they employ to conduct such information," Watts said. He also suggested that the State and Defense departments develop plans to combat deepfake attacks from enemies of the U.S.
Watts pointed to Russia as "an enduring conveyer of disinformation," and noted that China's capabilities are growing and could eventually be a bigger conveyer of deepfakes aimed at the United States.
A final approach – and one the expert panelists repeatedly mentioned – is education for social media users. "We need to get the tools and the processes to individuals," Doermann said, including ways to detect and report fake content.
But both panelists and lawmakers repeatedly noted that what makes deepfakes so dangerous is that people tend to believe them even when they are told otherwise.
"We tend to believe what our eyes and ears our telling us," Citron said. "We also tend to believe and tend to share information when it confirms our biases."
Ben Werschkul is a producer at Yahoo Finance based in Washington, DC.