The world of digital art is growing more complex as AI systems gain popularity, and those systems are testing the limits of law and ethics. Two big questions are emerging among artists, attorneys and users: whose art is it anyway, and how will the law keep up with digital art?
To understand why these AI systems are coming under scrutiny, one needs to know how they work. These platforms are trained on millions of digitised artworks and can produce a new piece of art from a text prompt in seconds. Anyone can use them, from seasoned painters to amateur artists.
A groundbreaking copyright lawsuit has been filed against AI art generator companies, with potential implications for the content these tools produce. The lawsuit, brought by a group of artists including Sarah Andersen, Kelly McKernan and Karla Ortiz, targets Stability AI Ltd., Midjourney Inc. and DeviantArt Inc. The artists allege that these companies unlawfully used copyrighted images without obtaining consent or compensating the artists. In addition to seeking damages, the lawsuit asks the court to prohibit the AI generator companies from using artists' work without permission, which could disrupt how these tools currently generate content.
Getty Images leads a list of pending lawsuits
Several lawsuits are already in the works and experts say more are sure to come.
Getty Images, the photo licensing company, filed a lawsuit against Stability AI, creator of Stable Diffusion, which can generate photorealistic images from text. Getty alleges that the company copied 12 million images without permission or compensation “to benefit Stability AI’s commercial interests and to the detriment of the content creators.”
Stability AI, DeviantArt and Midjourney are also involved in a lawsuit that alleges that the companies’ use of AI violates the rights of millions of artists. “These images were all taken without consent, without compensation. There’s no attribution or credit given to the artists,” said Matthew Butterick, a lawyer and computer programmer involved in the lawsuit. Butterick is also involved in another class-action lawsuit against Microsoft, GitHub, which is owned by Microsoft, and OpenAI — in which Microsoft is a major investor — alleging that GitHub’s Copilot system, which suggests lines of code for programmers, does not comply with terms of licensing.
Prisma Labs, the company behind the viral Lensa app, which creates avatars, is facing a lawsuit alleging the company illegally took users’ biometric data. TikTok recently settled a lawsuit with voice actress Bev Standing, who said the company used her voice without her permission for its text-to-speech feature.
While tech companies are highlighting the benefits of generative AI and are quickly integrating the technology into their products, media companies and creators see the downsides of their copyrighted work being co-opted. “Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay a commission or license an original image from that artist. Now those purchasers can use the artist’s work without compensating the artist at all,” the class-action court filing against Stable Diffusion states. The similarities can sometimes be obvious — there are even instances where Stable Diffusion recreated the Getty Images watermark in its work, which Getty details in its legal filing.
Stability AI did not provide a comment by press time.
Shutterstock, OpenAI and artist compensation
Technology has changed many jobs, but creative industries have largely stayed out of the fray. Until now. Companies are selling AI-generated prints, and Stable Diffusion can learn to copy an artist’s style within hours. Voice actors are being asked to sign away the rights to their voices so artificial intelligence can create synthetic versions, presumably to replace them. Writers worry about their work or style being co-opted without permission or compensation.
“Web-scraping and machine learning from huge digital databases requires massive amounts of human-created work as the sample. We want to make sure that work is legally and respectfully accessed, and that we are paid for its use,” said John Degen, chief executive officer of The Writers’ Union of Canada and chair of the International Authors Forum.
At least one company is looking to compensate human creators. Shutterstock, a provider of stock photography, footage and music which has worked with OpenAI since 2021 — OpenAI CEO Sam Altman has said it was “critical to the training” of its generative AI image and art platform DALL-E — has set up a contributors fund. It compensates content creators if their IP is used in the development of AI-generative models. Moreover, creators will receive royalties if new content produced by Shutterstock’s AI generator includes their work. Contributors can opt out and exclude their content from any future datasets.
Meanwhile, both Microsoft and Google continue to launch additional AI-enabled capabilities across products. Microsoft announced last month it will embed OpenAI’s ChatGPT into Microsoft 365 apps while Google said it will bring generative AI to Gmail and Google Docs. A Microsoft spokesperson said AI-generated content will be clearly labeled, encouraging users to review, fact-check and adjust. It will also make citations and sources easily accessible by linking to an email or document the content is based on or a citation when a user hovers over it. A Google spokesperson said Bard, the company’s competitor to ChatGPT, “is intended to generate original content and not replicate existing content at length.” The spokesperson also said Bard is meant to be a “complementary experience” to Google search, and includes a “Google It” button so people can move from Bard to explore information on the web.
Given how new generative AI is, it’s not surprising the legal system has yet to catch up. In the meantime, companies and individuals will be duking out their rights in court.