What's Next: Why Lawyers Need to Care About Deepfakes + AI and Zoning Law (It’s Going to Be a Thing)

Welcome back to What’s Next, where we report on the intersection of law and technology. Today, we talk with Stanford's Riana Pfefferkorn about deepfakes and why lawyers need to care about this alarming issue. Also, autonomous vehicles could affect our zoning laws (think fewer parking garages). More on that unexpected legal wrinkle, and more, below.

If you follow technology, it’s likely you’re in a panic over deepfakes—altered videos that employ artificial intelligence and are nearly impossible to detect. Or else you’re over it already. For lawyers, a better course may lie somewhere in between. We asked Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford Law School’s Center for Internet and Society, to explain (sans the alarmist rhetoric) why deepfakes should probably be on your radar.

How long have you been focused on the phenomenon of deepfake videos? What is it about deepfakes that most interests you?

I first got interested in deepfakes in the spring of 2018, when I was co-teaching a course on cybersecurity law and policy at Stanford. The other two instructors were a professor of computer science named Dan Boneh and a fellow at the Hoover Institution, Andrew Grotto, who used to be a top cybersecurity policy official at the White House. The two of them had been working on a paper together about deepfakes, and they talked about that during our final session of class.

That piqued my interest, because deepfakes get at a concept that is very important in both encryption and cybersecurity more generally: authentication. How can I trust that the person I think I'm chatting with over a messaging app is in fact that person and not an interloper? How can I trust that a piece of information I'm retrieving from a database is accurate and correct, and wasn't tampered with at some point before I pulled it? And what kind of proof will satisfy me, depending on the context? That will be different if I'm acting as a fact-finder in a criminal case in court, versus if I'm just casually texting with a friend.
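The authentication problem Pfefferkorn describes has a standard cryptographic building block for data integrity: a keyed message authentication code (MAC), which detects any change made to a record after its tag was computed. A minimal sketch in Python, using the standard library's `hmac` module (the key and record here are hypothetical stand-ins, not anything from an actual case):

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would be managed securely.
KEY = b"shared-secret-key"

def tag(record: bytes) -> bytes:
    """Compute an authentication tag over the record with HMAC-SHA256."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, t: bytes) -> bool:
    """Return True only if the record matches the tag (constant-time compare)."""
    return hmac.compare_digest(tag(record), t)

original = b"deposition video, exhibit 12"
t = tag(original)
print(verify(original, t))            # True: record unchanged since tagging
print(verify(b"tampered record", t))  # False: any alteration breaks the tag
```

Note the limit of the technique, which is exactly the gap deepfakes exploit: a valid tag proves the record hasn't changed since the tag was made, not that the content was genuine to begin with.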

In an upcoming issue of NWLawyer, you write about the evidentiary issues that will emerge if deepfake videos make their way into court cases. What led you to consider those implications?

At the same time as I was co-teaching this class last spring, there were two new additions to Federal Rule of Evidence 902: FRE 902(13) and (14), which had gone into effect at the end of 2017. They're both about that same concept, authentication—specifically, of electronically stored information (ESI). And by and large I think those amendments are going to do a lot to streamline the admission of electronic evidence. But Professor Boneh taught the class that methods of digitally authenticating videos, such as through watermarking or cryptographic signatures, are still susceptible to manipulation. So, when I got up to talk about Rules 902(13) and (14), I cautioned the students that these new amendments will not be a magic bullet to keep deepfakes from creeping into evidence.

Before coming to Stanford, I spent several years as a litigation associate at a large law firm, and before that I clerked for a magistrate judge. So even though I'm now in this more academic role, I'm still interested in the nuts and bolts of pre-trial and trial practice. I find procedural issues fascinating. And that's what really grabbed me about deepfakes: How are we going to deal with them when parties start trying to introduce them into evidence in court? Are the rules courts have developed over the years to guard against forged or tampered evidence, from handwritten documents to digital photographs, going to hold up? Or are we going to need new rules?

You point to a number of challenges for courts “in the not-too-distant future” if deepfake videos make their way into litigation. How fast is that future approaching and do you think litigators and judges are sufficiently alert to these issues?

The verisimilitude of AI-generated photos and videos seems to be growing by leaps and bounds on a near-daily basis. AI is now capable of generating fake human faces in which I, for one, cannot detect any tell-tale signs that they're not real photos. So I think that future is probably coming, maybe not this year, but definitely in the next couple of years. The tools for making deepfakes are becoming ever more sophisticated. Right now the most impressive AI-generated images and video are coming from academic teams and teams at big companies, places with deep resources to devote to AI. But that progress will also trickle down rapidly to the deepfake tools that are available to anyone on the Internet to use for free. So the abusive applications of deepfakes, which are already happening, are only going to ramp up.

First we will probably see litigation about deepfakes, where the plaintiff is someone who's been victimized by a video purporting to depict her saying or doing something she didn't do, and she's trying to recover under some tort theory. But with the advances in free, readily available tools, I think in the next couple years we'll also see deepfakes creeping into run-of-the-mill cases. In those cases, the deepfake won't be the basis of the cause of action, it'll be just another piece of evidence in the case. That's been true of social media: evidence from social media now plays a part in a wide array of cases, not just cases that are about social media platforms (e.g. cyberbullying). I think it will be true of deepfakes too.

I definitely don't think litigators and judges are thinking about these issues sufficiently yet. But we should be getting ready, while we still have a little lead time before deepfakes start cropping up everywhere. That's where I'm planning to go next in my work on deepfakes: developing practical guidelines and suggestions for how courts should go about the task of rooting out deepfake evidence, what the dos and don'ts are for litigators as they're collecting evidence for their case, and maybe also the role of expert witnesses. Experts are yet another part of the picture of deepfakes in the courtroom. This is such a cutting-edge issue that there are only a few people right now who are qualified to give expert opinions as to whether or not something is a deepfake. If deepfakes come up in enough cases, that handful of individuals is going to be in very high demand. So in addition to the need for lawyers and judges to prepare, I also foresee an issue with the expert pipeline.

Deepfakes in the courtroom is such a big topic. There's a lot of work to do.

With the rise of deepfakes, do you see a risk that jurors may discount authentic video and audio evidence?

Yes. Once a video has been authenticated and the court has admitted it into evidence, it's for the jury to decide how much weight to give it, and the opposing party may make arguments to try to minimize its weight. I think juries may be more easily persuaded by such arguments now, in the age of "fake news," than they might have been in the past, thanks to public awareness of the deepfakes phenomenon.

We might even see a kind of "reverse CSI effect," where juries may expect the proponent of a piece of video or audio evidence to employ a lot of high-tech bells and whistles to persuade them that real evidence is not fake, even after it's been admitted. But that's expensive and time-consuming, and that shouldn't be what it takes to get juries to keep believing what's real is real. Right now in my research on this topic, I'm thinking through other options, such as whether the proponent could ask for a jury instruction (and expect it to be heeded).

With that said, the public has also been aware of other kinds of fakery, such as forged signatures and Photoshopped images, and those didn't drive juries to total nihilism about whether it's possible to know what's real. So my hope is that both judges and juries will take deepfakes in stride. Time will tell.

Aside from the litigation context, where do you anticipate that practicing lawyers may encounter deepfakes?

Wherever videos come into play, that's a chance for deepfakes to become an issue. And that means a range of practice areas. For example, say you are an M&A lawyer doing due diligence on a possible deal between your client and another company. If a fake video surfaces that seems to show the company's CEO making racist or sexist remarks, or stating that the company's marquee product does not work as well as advertised, that could influence your client's decision about whether to go forward with the deal. For corporations, the well-timed release of a deepfake video could mess with a lot of business dealings, attract regulator attention, hinder investment and recruiting, and anger shareholders.

The problems aren't limited to the corporate context. In employment matters, a deepfake video might lead to an employee's termination, or cause a job candidate not to be hired. Think, too, of matters of death and incapacity. In a will contest, for example, a deepfake video might be used to persuade the probate court that the decedent was, or was not, of sound mind at the time of the will signing. Or, someone might fall into a persistent vegetative state without having an advance health care directive in place. If there is a dispute among her loved ones about what her wishes would have been, a fake video might affect the dispute by supposedly depicting her talking about what she wanted.

We can foresee a range of legal settings in which deepfakes might come up, and they won't be limited to the litigation context. That means attorneys of all stripes need to be thinking about the role video recordings play in their practice and how deepfakes might affect that.

—Vanessa Blum

Zoning Law and Artificial Intelligence: It’s Going to Be a Thing. Really

Autonomous vehicles are poised to transform life, especially in major cities. But most of the focus has been on issues surrounding transportation flow, the environment and insurance. When they arrive en masse, though, self-driving cars will also transform public policy in a way that many lay people—or even attorneys—might not expect.

Eric Tanenblatt, the global chair of public policy and regulation at Dentons, recently sat down for an interview with Legaltech News, detailing the multitude of ways autonomous vehicles will be transformative. One way surprised us, though: zoning laws. Don’t see the connection? Well, consider that autonomous vehicle fleets, like Ubers and Lyfts now, will be expected to be constantly moving, Tanenblatt said. Then add in that many people will no longer own their own cars, relying instead on these constantly moving fleets.

“What that means is that there won’t be the need for parking decks and parking garages, parking lots to the extent we have them now. That frees up the space for more economic development or green space, that’s going to require local governments to change some of their zoning laws,” he explained. He also later added, “There may be new requirements where you need to add drop off and pick up access, because there will be so many vehicles driving around.”

It’s a strange thing to think about—parking lots are a staple of American life at this point. Singing, “they paved paradise and put up a Waymo drop off point” doesn’t quite have the same ring to it. And yet, these are the potential realities of the new AV age.

Some may say that day is far in the future, given the way many regulatory bodies—local and national alike—are slow to react. But, Tanenblatt explained, that’s no reason to ignore the possibility for now. “It’s not slowing private industry down, and I don’t foresee it slowing down in the near term,” he said. “What’s going to happen is the government is going to need to catch up. And until the federal government does, it’s in every company’s best interest to understand what the rules are in each local jurisdiction.”

—Zach Warren


3 Things Labor and Employment Lawyers Should Know About Using AI in Hiring

Kelly Trindel is head of industrial organizational science and diversity analytics at pymetrics, Inc., where she helps the company proactively test for hidden racial/ethnic and gender biases in assessment tools. The New York-based startup uses games based on cognitive neuroscience and artificial intelligence to help employers find the candidates who best fit their needs, while reducing gender and racial biases.

Before joining the company in 2018, Trindel, who has a Ph.D. in experimental psychology, was chief analyst and research director at the U.S. Equal Employment Opportunity Commission in Washington, D.C. While there, she provided statistical and analytical support to the commission’s discrimination investigations and case development for nearly eight years during the Obama administration.

Pymetrics’s cloud-based assessment tools are being used by Tesla, LinkedIn and Unilever, among others. The company raised $40 million in venture capital funding last fall.

Trindel spoke with us recently about what labor and employment lawyers need to know about the use of artificial intelligence and machine-learning in recruitment and hiring. Her remarks are edited for brevity and clarity.

Assessment and hiring tools must comply with the federal Uniform Guidelines on Employee Selection Procedures, adopted under Title VII of the Civil Rights Act, which have been around in some form since 1978. My message to labor lawyers is that the guidelines are still relevant. Old regulations still matter, and labor lawyers should be asking vendors of tools like pymetrics whether, and how, their tools comply with the guidelines. If I were still at the EEOC and investigating the use of a tool from pymetrics or another company, I would be looking at the uniform guidelines.

Artificial intelligence offers opportunities to improve the fairness and validity of assessment tools. AI tools give us new ways of de-biasing. Before we go live with a model at pymetrics, we test it with a group of people we call a bias set: people who have played our games and voluntarily given us their race, ethnicity and gender. In this way, we can see prior to going live whether one demographic group performs significantly differently from another, whether men outperform women, say, or whites outperform Hispanics. If there is a significant difference, we can identify the predictors that cause it and remove them from the local model.
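A pre-launch bias check like the one Trindel describes typically starts from the Uniform Guidelines' well-known "four-fifths rule" (29 C.F.R. 1607.4(D)): a selection rate for any group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. A minimal sketch of that screening test (the group names and counts below are hypothetical, not from pymetrics):

```python
# Four-fifths rule screen: flag groups whose selection rate falls below
# 80% of the highest group's rate. All numbers here are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def adverse_impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(h, a) for g, (h, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# (hired, applicants) per demographic group
groups = {"group_a": (48, 80), "group_b": (24, 60)}
ratios = adverse_impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b']: 0.4 / 0.6 ≈ 0.67, below the 0.8 threshold
```

The four-fifths rule is only a rough screen; in practice (and as the guidelines themselves note), statistical significance testing on the underlying rates is also part of the analysis.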

Be aware that facial recognition technology seems to be drawing special scrutiny at the EEOC, and not just in employment. That scrutiny stems, in part, from MIT research finding that facial recognition has had trouble accurately detecting the facial expressions of minorities, especially women of color. In September 2018, a group of U.S. senators, including Sen. Kamala Harris, sent a letter to the acting chair of the EEOC asking for the commission’s perspective on the use of facial recognition technology and AI in employment selection, and it is useful for labor lawyers to know this is a focus. To my knowledge, the commission has not issued an official response. This may be because the EEOC currently lacks a quorum. But it is something for labor lawyers to be aware of.

—MP McQueen

On the Radar:

Opt-in or Opt-out: That was a key part of a data privacy debate in Washington, D.C., this week that included tech privacy counsel and the Senate Judiciary Committee. Google, Intel and other company representatives and privacy advocates served as panelists for the hearing that focused on the recent data protection laws in California and the European Union. Read more from Caroline Spiezio here.

Staffing Up: Akin Gump Strauss Hauer & Feld has picked up U.S. Federal Trade Commission official Haidee Schwartz, who was most recently the acting deputy director of the Bureau of Competition. She saw the agency square off with major companies over a string of proposed combinations, such as the merger of prosthetics makers Otto Bock HealthCare and FIH Group Holdings; the attempted merger of fantasy sports platforms DraftKings and FanDuel; and deals involving Staples and others. Read more from Ryan Lovelace here.

GCs on Legal Tech: Ian McDougall, the general counsel of LexisNexis, offered his predictions on legal tech adoption, increasing legal department sizes and changing inside-outside counsel relationships while speaking at a Stanford University event this past week. An increased focus on efficiency and value has also led in-house counsel to tap into advancing legal tech, he said, noting general counsel don’t want to pay outside counsel high rates for a job that could be done by a computer. Read more from Caroline Spiezio here.
