On Monday, Taylor Lorenz posted a telling story about how Meta has been suppressing access to LGBTQ content across its platforms, labeling it as “sensitive content” or “sexually explicit.” Posts wi…
Training LLMs on copyrighted material isn’t illegal to begin with, just like how learning from a pirated book isn’t, or having drugs in your system isn’t; only being in possession of those things is illegal.

GDPR violations, on the other hand, are illegal. You’re right in principle, don’t get me wrong, and I appreciate your healthy cynicism, but in this particular case being slapped with a GDPR fine is simply not worth keeping one user’s data.
Edit: Downvoted for being right as usual. Bruh Lemmy is becoming more and more like Reddit every day.
Training LLMs on copyrighted material isn’t illegal to begin with

Reproducing identifiable chunks of copyrighted content in the LLM’s output is copyright infringement, though, and that’s what training on copyrighted material leads to. Of course, that’s the other end of the process, and it’s a tort, not a crime, so yeah, you make a good point that the company’s legal calculus could be different.
Thank you, I’m glad someone is sane ITT.
To refine the point further: do you know of any lawsuits that were successfully decided on the basis that, as you say, the company that made the LLM is responsible because someone could prompt it to reproduce identifiable chunks of copyrighted material? Which specific statutes make it so?

Wouldn’t it be like suing Seagate because I use their hard drives to pirate corpo media? I thought Sony Corp. of America v. Universal City Studios, Inc. would serve as the basis there, and just like with Betamax, it’d be the distribution of copyrighted material by an end user that would be problematic, rather than a product’s potential to be used for copyright infringement.
https://www.youtube.com/watch?v=uY9z2b85qcE
To be clear, I think it ought to be the case that at least “copyleft” GPL code can’t be used to train an LLM without requiring that all output of the LLM become GPL (which, if said GPL training data were mixed with proprietary training data, would likely make the model legally unusable in total). AFAIK it’s way too soon for there to be a precedent-setting court ruling about it, though.
In particular…

I thought Sony Corp. of America v. Universal City Studios, Inc. would serve as the basis there

…I don’t see how this has any relevance at all, since the whole purpose of an LLM is to make new – arguably derivative – works on an industrial scale, not just single copies for personal use.
Because it’s the same basic reason that hard drives are legal. With a hard drive and a PC, I can make practically infinite copies of copyrighted material from, e.g., a DVD.

Only the wrongful redistribution of such copies is an actual crime, not providing the tooling for it; otherwise Seagate or whatever HDD manufacturer would be liable for the copyright infringement I commit using its drives, and so would my ISP if I distributed the copies, etc.

The ruling was particularly clear: VCRs were allowed to stay on the market because they were capable of substantial non-infringing uses. The same goes for hard drives.
I’m sure you see where I’m going with this. Because LLMs let you generate things that (regardless of your opinion on whether all outputs are theft) wouldn’t be found substantially similar to any copyrighted work in court, they have non-infringing uses.

Therefore, it’s the person who prompts for and generates the image of copyright-infringing material who is responsible, not the researcher, patent holder, programmer, dataset provider, supplier, or distributor of the LLM.
Fair enough, thanks for the clarification.