Most lay people haven’t given a second thought to the fact that most of the words and images in the datasets behind artificial intelligence agents like ChatGPT and DALL-E are copyrighted, but Peter Henderson thinks about it - a lot.

“There’s a lot to think about,” says Henderson, a JD/PhD candidate at Stanford University and co-author of the recent paper, Foundation Models and Fair Use, which lays out a complicated landscape.

“People in machine learning aren’t necessarily aware of the nuances of fair use and, at the same time, the courts have ruled that certain high-profile real-world examples are not protected fair use, yet those very same examples look like things AI is putting out,” Henderson says. “There’s uncertainty about how lawsuits will come out in this area.”

The consequences of stepping outside fair use boundaries could be considerable. Not only could there be civil liability, but new precedent set by courts could dramatically curtail how generative AI is trained and used.

Written with doctoral candidate Xuechen Li and Stanford professors Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, and Percy Liang, the paper provides historical context on fair use - a legal principle that allows the use of copyrighted material in certain limited cases without fee or even credit - and lays out several hypotheticals to illustrate the knotty issues AI raises.

The scholars also survey some of the proposed strategies for dealing with the problem - from filters on the input data and the output content that recognize when AI is pushing the boundaries too far, to training models in ways more in line with fair use.

“There’s also an exciting research agenda in the field to figure out how to make models more transformative,” Henderson says. “For example, might we be able to train models to only copy facts and never exact creative expression?”

Raising Questions

As AI tools continue to advance, they challenge the traditional understanding of fair use, which has been well defined for news reporting, art, teaching, and more. New AI tools - both their capability and their scale - complicate this definition.

“What happens when anyone can say to AI, ‘Read me, word for word, the entirety of Oh, the Places You’ll Go! by Dr. Seuss’? Suddenly people are using their virtual assistants as audiobook narrators - free audiobook narrators,” he notes. It is unlikely that this example would be fair use, according to the paper, but even that call is not a simple one.

If infringing content appears on traditional platforms, like YouTube or Google, a law called the Digital Millennium Copyright Act (DMCA) lets the platform take down content. But what does it mean to “take down content” from a machine learning model? Even worse, it is not yet clear whether the DMCA even applies to generative AI, so there may be no opportunity to take down content at all.

Over the next few months and years, lawsuits will force courts to set new precedent in this area and draw the contours of copyright law as applied to generative AI. Recently, the Supreme Court ruled that Andy Warhol’s famous painting of Prince, based on another artist’s photograph, was not fair use. So what happens when DALL-E’s art looks a little too much like an Andy Warhol transformation of a copyrighted work?

Such are the complex and thorny issues the legal system will have to resolve in the near future. Henderson does have some recommendations for coming to grips with this growing concern. The makers of AI can install fair use filters that try to determine when the generated work - a chapter in the style of J.K. Rowling, for instance, or a song reminiscent of Taylor Swift - is a little too much like the original and crosses the line from fair use into infringement.
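The article mentions fair use filters that check whether generated output is too close to a protected original. As a minimal sketch of that idea - not the paper's method, and far cruder than anything deployable - the snippet below flags generated text that shares a long verbatim run of words with a protected work:

```python
# Toy "fair use filter": flag generated text containing any run of n
# consecutive words that also appears verbatim in a protected source.
# Illustrative only; real filters must handle paraphrase, style, etc.

def ngrams(text, n):
    """Return the set of n-word sequences in `text` (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flags_as_copying(generated, protected_sources, n=8):
    """True if `generated` shares any n consecutive words with a source."""
    gen_grams = ngrams(generated, n)
    return any(gen_grams & ngrams(src, n) for src in protected_sources)

corpus = ["it was the best of times it was the worst of times it was the age of wisdom"]
print(flags_as_copying("he said it was the best of times it was the worst of times indeed", corpus))  # prints: True
print(flags_as_copying("an entirely original sentence about something else altogether here now", corpus))  # prints: False
```

Note that verbatim n-gram overlap would catch the word-for-word Dr. Seuss recitation above but not the harder cases the article raises - a chapter merely in the style of J.K. Rowling copies no exact sequence of words, which is one reason these filters are an open research problem.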