…I don’t see how this has any relevance at all, since the whole purpose of an LLM is to make new – arguably derivative – works on an industrial scale, not just single copies for personal use.
Because it’s the same basic reason that hard drives are legal. With a hard drive and a PC, I can make practically infinite copies of copyrighted material off, e.g., a DVD.
Only the wrongful redistribution of such copies is actually unlawful, not providing the tooling for it; otherwise Seagate or whatever HDD manufacturer would be liable for copyright infringement I commit using its drives, as would my ISP if I distributed the copies, and so on.
The ruling (Sony v. Universal, the 1984 Betamax case) was particularly clear: the reason VCRs were allowed to stay on the market was that they had substantial non-infringing uses. Same with hard drives.
I’m sure you see where I’m going with this. Because LLMs let you generate things that (regardless of your opinion on whether all outputs are theft) wouldn’t be found infringing under a substantial-similarity test in court, they too have non-infringing uses.
Therefore, it’s the person who prompts for and generates the copyright-infringing image who is responsible, not the researcher, patent holder, programmer, dataset provider, supplier, or distributor of the LLM.
Fair enough, thanks for the clarification.