AI's lack of 'traditional human inventors' clouds patent and copyright suits

Legal precedent on how patent and copyright law applies to AI remains unsettled, a topic Brown Neri Smith & Khan, LLP partner Ryan Abbott discusses with Yahoo Finance Live. He notes two core questions: whether businesses can patent AI-generated work that lacks a "traditional human inventor," and whether original material generated by AI can be legally copyrighted.

The issues tie into the New York Times' (NYT) lawsuit against Microsoft (MSFT) and OpenAI for using its published content to train AI without permission. Abbott explains the case aims to determine whether such training constitutes "copyright infringement" or whether fair use provisions legally allow it.

Overall, he says current intellectual property frameworks struggle with "machines behaving like people" by creating original material. Existing standards assume human creators, whereas AI autonomously produces work after training on other people's prior content, and that mismatch creates the dilemma at hand.

"When an AI gets involved, it gets potentially a little bit trickier," Abbott tells Yahoo Finance, adding "You're asking... who is the inventor? Because we need an inventor to have someone own a patent. The right goes to the inventor and then it goes to who they've assigned the invention to or who is entitled to it."

Follow along with Yahoo Finance's AI Revolution special coverage this week, or watch this full episode of Yahoo Finance Live here.

Editor's note: This article was written by Angel Smith

Video Transcript

RACHELLE AKUFFO: Well, a wave of artificial intelligence lawsuits is starting to pile up in America's courtrooms. And one of the more widely known lawsuits was filed last month, just days before heading into 2024. The "New York Times" sued Microsoft and OpenAI, the startup behind ChatGPT, over copyright infringement.

The paper alleges millions of its articles were used to train AI programs and wants the companies to be held accountable for, quote, "billions of dollars" in statutory and actual damages. So could lawsuits like this rewrite the rules of AI? For more on how copyright law could threaten the AI industry, we're joined by Ryan Abbott, partner at Brown Neri Smith & Khan, LLP.

Thank you for joining us this morning. So I want to first draw the distinction between the copyright issues and the patent issues, and, really, the questions that are being raised but not covered by existing law.

RYAN ABBOTT: No. Thanks so much for having me. Well, you know, what we're fundamentally seeing is this new sort of activity where you have machines that are behaving like people and legal systems that were designed with human-centric concepts. And so in the patent context, for example, we're asking if a company like Novo Nordisk uses AI to develop Wegovy or Ozempic, can they patent something like that? Or do you need a traditional human inventor?

And this may be an issue where you have tech companies licensing drug discovery models to big pharma companies, and the people using the models don't really know how they work or how the output is validated, but it's generating useful output. And so this really questions some fundamental tenets of our IP system. In copyright, you have similar sorts of concerns.

So one is if you're using something like Midjourney or Stable Diffusion and you say, I'd like a graphic for Yahoo Finance, and it gives you a piece of artwork, whether you can copyright that or Yahoo can copyright that. And the "New York Times" case is one of many that have been brought recently against AI developers. That case is alleging a number of things, but one of them is that it is copyright infringement to train AI models on copyright-protected content without permission.

So OpenAI, for its large language models, uses massive amounts of data scraped from the internet. It would be, they allege, impossible or impractical to license that information, because there are so many parties, there would be holdouts, and the costs would be prohibitive. And in the US, it's an open question right now whether that is permitted under copyright law-- whether it is infringement or permissible under a standard we call fair use, because there are a lot of exceptions to copyright infringement-- for example, a human being training on information.

AKIKO FUJITA: Ryan, let's home in on the part about patents, because you did testify before Congress, and you've called for an overhaul of the current patent law in place. Take that first example you just gave about a pharmaceutical company developing a product with the assistance of AI or utilizing AI-- what does current law say about who owns the patent? And how does that need to change?

RYAN ABBOTT: Sure. Well, right now, most inventors don't own their patents. So Novo Nordisk may employ large teams of research scientists. At the end of the day, the company will generally own that patent. And it is not uncommon in drug discovery to have collaborations between multiple companies, and they'll work out who owns it by contract. And that can be kind of complicated.

When an AI gets involved, it gets potentially a little bit trickier as you're asking, you know, who is the inventor? Because we need an inventor to have someone own a patent. The right goes to the inventor, and then it goes to who they've assigned the invention to or who's entitled to it.

And you could potentially have a lot of people doing complex things in this scenario. Again, someone can be building a drug discovery model. Someone can be training it on a particular data set. Someone can be using it to generate output. Someone else could be validating the output.

And there really isn't law right now on, well, who in that chain is the inventor? And it can make a big difference to ownership if those individuals are at different companies, especially if they don't have contracts in place or if there are open-source models. But the law right now, and this is recent as of last year, is that if you don't have someone you can point to as having made an inventive contribution, you can't get a patent.

And some companies like Siemens have reported that they've been unable to file patent applications because the human beings involved in the process have said, I didn't do anything inventive. This is just AI output. It was obviously valuable, and I'm not an inventor. So as AI does more and more in R&D, the way it's doing right now in the creative industry, this is an increasing risk for businesses that use AI in R&D.

AKIKO FUJITA: So, Ryan, let's say the law is changed so that a company or individual that owns the AI system can, in fact, own the patent for any inventions that come through the system. To what extent does that stifle innovation, particularly for smaller players? Because the argument is that the bigger companies out there can fight some of these lawsuits, and that's not necessarily the case for some other players.

RYAN ABBOTT: Sure. Well, you know, it's hard to say how this is going to develop. In the creative industry, to a certain extent, the release of these powerful models has really helped to democratize creativity. So I have very little creative skill, but if I wanted to make a graphic novel or even a short film, I could use generative platforms to help me do something that, in the past, realistically, only a music or a movie studio could produce.

So it may be that the release of more powerful AI models does return some power to SMEs that are looking to innovate. But even on the theory that we have further industry consolidation, with a few large pharma companies and a few large tech companies driving a lot of AI-enabled innovation, fundamentally, I think that would still be a pretty good social outcome. You would then have Pfizer and GSK churning out new drugs, shortening the pipeline for validating that those drugs work, resulting in everyone getting a lot more socially valuable innovation, which is really what the system is designed to do. And it's something where large players do have an advantage right now, including because they can afford to prosecute patents and litigate them, which can be quite costly.

RACHELLE AKUFFO: And, Ryan, how do you quantify harm in this situation? I know for Anthropic, one of their defenses was that they were looking for their accuser to show harm. But say you repurpose a picture you've used to generate an image or a product-- how do you end up quantifying harm if, perhaps, they're not making a profit off of it? How does that end up playing out?

RYAN ABBOTT: Sure. Well, copyright law is different in every jurisdiction. In the United States, there are statutory damages for copyright infringement, and for willful infringement-- where someone, essentially, knew something was protected by copyright and just didn't care and went ahead with it-- you have six-figure statutory damages. So you don't need, in those instances, to prove actual harm.

And depending on how you're counting instances of infringement, potentially the damages could be astronomical. This was an issue in the Google Books cases, where Google was digitizing libraries and the Authors Guild and other entities alleged that was infringement. And there were, potentially at least, billions of dollars of damages on the table.

Ultimately, the courts decided that was fair use. One can also seek actual damages, and that generally requires that one is being deprived of some sort of commercial opportunity. So individuals who aren't commercial artists but have a moral objection to their works being used to train AI systems, and there are a number of such individuals, would generally be seeking statutory damages.

AKIKO FUJITA: It's a fascinating, very convoluted process that's likely to play out here in the months and years ahead. Ryan Abbott, partner at Brown Neri Smith & Khan, LLP, really great to have you on today.

RYAN ABBOTT: Thanks so much.