Colonizing Art
The euphoria surrounding generative AI often ignores ethics and rights, especially when it comes to artists whose work may be lifted for free and altered without consent.
By Payal Dhar
The much-hyped image-generating AIs such as Stable Diffusion and DALL-E can whip up original artwork with astonishing speed in response to natural-language text prompts. This is fantastic news for non-artists who want to create images. It’s not such good news for illustrators, artists, and photographers: Not only do the AIs threaten their livelihoods, but the image-generating algorithms may have been “trained” by consuming the very creative work they now purport to replace.
To show how it works, we created this logo image in the style of Salvador Dalí for a friend thinking of opening a coffee shop in Manhattan.
The fear that generative AI may infringe on the rights and income of the creative class has received enormous press. Now a group of artists is bringing a class-action lawsuit against Stability AI, DeviantArt, and Midjourney over the image-creating tool Stable Diffusion, charging that it was trained on copyrighted work to generate images without compensation or consent. The complaint builds on the work of technologists Andy Baio and Simon Willison, who in late 2022 peeked under the hood of Stable Diffusion. Sampling 12 million of the unaltered, copyrighted images in its training data, they found the work of almost 2,000 artists. Stable Diffusion isn’t distributing the raw images, of course, but it is spitting them out in altered form.
Baio and Willison’s discovery highlighted the complex, often-confusing nature of generative AIs. Journalists and tech developers alike often get seduced by the word intelligence. In reality, however, these are tools whose workings reflect the designs and goals of their creators. AI is not a conscious agent, and it does not generate the raw materials from which art is made.
As with AI apps for writing (like ChatGPT, capable of generating poems, screenplays, and essays) or for coding (GitHub Copilot), tools like Stable Diffusion can perform complex creative tasks in a fraction of the time, and at a fraction of the cost, of a human artist by drawing on the images and styles that human artists produced in the past. The widespread tendency to attribute willful intent to these AIs obscures the real challenges we face. Generative art platforms truly do have great potential for reinventing and democratizing art, giving more people the opportunity to create in more skillful and expansive ways. But realizing that potential will require setting aside the modern mythologies about AI and reckoning with the acquisitive, colonial values that have quietly been built into many of them.
AI is not necessarily antithetical to art. Artists themselves have been exploring its possibilities. One pioneer in this area is Lisbon-based Sofia Crespo, a neural artist who blends AI techniques into her vivid, hyperrealistic artworks that reimagine and recreate the natural world. “Art is always in motion,” she says, “and every time there’s a new artistic movement, we are forced to rethink…to redefine art.”
The problem lies not with the AIs but with their applications. Human creatives are being appropriated to make the technology that will replace them. It is a technological form of colonization: Instead of a violent capture of land and resources, the AI industry uses more subtle means to enrich the powerful few by taking resources from others with far less power. In a familiar pattern, the least successful, most marginalized creators stand to lose the most.
The engine behind these generative AI tools is machine learning—a process in which software algorithms “learn” about the world the way humans do, through experience rather than explicitly programmed knowledge. Vast amounts of data are required to train these learning-hungry machines. In the case of image- and text-generating AI, the training data come from publicly available text and images pulled off the internet. Public doesn’t mean copyright-free, however, as Baio and Willison found when they investigated Stable Diffusion’s dataset.
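To make the mechanics concrete, here is a minimal sketch, in Python, of how scraped image-caption pairs become training examples. The URLs and field names are hypothetical, and real pipelines process billions of rows of this shape, but the basic move is the same: pair a publicly reachable image with its caption, with no step that checks copyright or consent.

```python
# Minimal sketch (hypothetical URLs and field names): turning
# web-scraped image-caption metadata into training pairs.
import io

import requests
from PIL import Image

# Each record pairs a public image URL with its caption or alt text,
# scraped from the open web. "Public" does not mean copyright-free.
scraped_metadata = [
    {"url": "https://example.com/artwork1.jpg", "caption": "a nature painting"},
    {"url": "https://example.com/artwork2.jpg", "caption": "a hyperrealistic jellyfish"},
]

def build_training_pairs(records):
    """Download each image and pair it with its caption."""
    pairs = []
    for record in records:
        response = requests.get(record["url"], timeout=10)
        image = Image.open(io.BytesIO(response.content)).convert("RGB")
        # The trained model doesn't store the raw files; it learns
        # statistical associations between pixels and caption text.
        pairs.append((image.resize((512, 512)), record["caption"]))
    return pairs
```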
“Artists’ works have been treated like a strip mine to load into these datasets, without any of them really being aware of it or consenting to it, and then building a tool to train an AI that’s going to use that dataset to learn from,” says Canadian nature artist Glendon Mellow. Mellow’s work was among those found in the Stable Diffusion dataset.
Art-generating AIs have been trained to learn the relationships between images and text. When a user enters a text prompt, the AIs can pull up visuals related to the relevant terms and generate an original image, adding details like highlights, reflections, shadows, and so on. They can also modify images, pick out different elements, or blend two or more images to come up with something new. Because they learn artists’ styles, the AIs can also mimic them, meaning that users can conceivably ask for an image in Mellow’s distinctive style, say, and never have to pay him.
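That prompt-to-image loop is easy to reproduce. Below is a minimal sketch using the open-source diffusers library; the checkpoint name is one publicly released Stable Diffusion model, and the “in the style of” phrasing is illustrative, but the point stands: invoking an artist’s look takes one line of text.

```python
# Minimal sketch of prompt-to-image generation with Hugging Face's
# open-source diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; use "cpu" (and drop float16) otherwise

# Naming an artist in the prompt is all it takes to ask for their look,
# with no payment or permission involved.
prompt = "a logo for a Manhattan coffee shop in the style of Salvador Dali"
image = pipe(prompt).images[0]  # a PIL image
image.save("logo.png")
```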
In addition to using the work of uncompensated, non-consenting artists and other creators, companies like Microsoft (which invested billions in OpenAI, the maker of DALL-E) are engaged in another form of colonial exploitation. They are poised to profit from mining the commons—in this case, the internet as a whole. Machine learning consumes the entire online ecosystem, the immense, interconnected network of data, knowledge, information, and other resources accessible by anyone, anywhere with an internet connection. When big tech companies monopolize this common resource—using powerful technological tools to scrape freely available data to create a commercial application—it is akin to Nestlé commercializing water and selling it back to locals.
There are also representational harms perpetuated by the massive databases used by machine-learning applications, data scientist Carol Anderson explained in an email interview. For example, the Global South is underrepresented in the training data because it’s underrepresented on the web itself. “As a result, models trained on web-scraped data fail to adequately represent all of humanity,” Anderson says. “If you ask a text-to-image model to make a picture of a lawyer, you’ll almost certainly get an image of a white man in Western clothing.”
Some AI developers contend that training generative AI on existing creative work is similar to the way that humans are inspired by other artists. Bangalore-based illustrator and graphic designer Vinayak Varma calls the argument a “false distinction” that grossly oversimplifies how creativity works. Good art comes from years of honing technique and application, he says, and also from the sum of one’s lived experience: “Great personal loss, big victories and grand loves, stories forged in the heat of living, voyages attempted despite insurmountable odds, throwaway observations during a boring commute, your favorite cat, the way the morning light hits that one grape, bad politics, a shooting pain in your left ear, whatever,” he says.
“Learning to see shapes and colors through the eyes of another artist is but a fraction of what goes into building your own artistic practice. To think one can average out these things with an algorithm is to think you can stuff a living person into a thimble.”
AI colonization is already starting to turn a profit for the colonizers. AI-generated images are being used in place of stock photos taken by real photographers. Shutterstock, a leading source of stock images, recently announced plans to offer access to DALL-E alongside its collection of licensed images. “Shutterstock images were included in the DALL-E training data,” Anderson notes, “so we have an interesting situation where the AI output will be competing directly against the art used to train it.”
The rapid commercialization of generative AIs, riding roughshod over issues of ethics, consent, and copyright, enrages artists. Midjourney charges anywhere from $10 a month for a basic subscription to $600 a year for a pro plan; DALL-E has a pay-as-you-go model as well. “Corporations should have checked with the artistic community before using their work,” Crespo says. And individual artists’ works are so entrenched in the platforms that mimicry is accomplished with ease; one app, AI Text Prompt Generator, lists the names of 1,500 artists whom users can emulate on the art engines without compensating them or obtaining their consent.
Creators are starting to fight back with legal challenges. In November 2022, designer, programmer, and lawyer Matthew Butterick teamed up with the Joseph Saveri Law Firm in San Francisco to file a class-action lawsuit against the makers of GitHub Copilot, a code-writing assistant, for violating the open-source licenses under which GitHub programmers share their code. The same team is behind the art-focused Stable Diffusion class-action lawsuit.
Butterick anticipates more such lawsuits on the way. “Five or six years ago, we heard a lot from AI enthusiasts and researchers about the social and economic risks of AI, and the importance of ethics and safety,” he told OpenMind via email. “But as more investment money has entered the field, it feels like those guardrails are being dismantled. Big tech companies are working hard to construct economic, moral, and legal arguments to harvest the vast quantities of data that AI systems demand. But these systems are not magical black boxes that are exempt from the law.”
Equity, the UK’s union of 47,000-plus performers and creative practitioners, including artists and designers, runs a campaign called Stop AI Stealing the Show, advocating for the UK’s intellectual property laws to be updated to keep pace with AI. Most Equity members believe AI presents a threat to employment opportunities in the performing arts, according to Liam Budd, who leads the union’s work around AI. That concern was borne out for voice artists in 2023, when Apple launched AI-narrated audiobooks.
Budd recognizes that AI is not going away, but he wants it to support artists rather than undermine them. “We want to make sure that there is a system of consent, including ownership over how it is used,” he says. He is concerned about the opaque contract language around the scope of rights: some Equity members accepted recording work for research purposes, only to realize later that they had signed away the rights to use their voices or voice likenesses for commercial purposes. He also believes creatives contributing to AI works deserve fair compensation: “Often they’re just one-off payments, which don’t reflect the fact that their contribution can be used forever, on thousands of projects around the world.”
Comprehensive, modernized copyright laws that take AI-created content into account will have to address all of these concerns. At present, the technology exists in a lawless utopia—or dystopia. Nations are scrambling to pass AI regulations, but many of these laws appear aimed at enabling the AI industry rather than protecting creatives from colonial exploitation. The UK, for example, is considering a copyright exemption that would protect data mining for AI development, preventing rights holders from opting out. “This could have potentially devastating consequences for the creative industry,” Budd says.
Many artists argue that, at the very least, machine learning models that generate art need to credit, track, and pay the creators of the training data used in them. As British writer and illustrator Kyle Webster tweeted last September: “All I want is for artists to have a bigger seat at the table—every table where this tech is being created.”
AI’s colonization problem extends deeper than the issues of credit and compensation, however. Currently, a large portion of research dollars in the field comes from a small group of affluent, mostly white male stakeholders whose decisions will shape the long-term impact of AI. This top-down approach has traditionally directed the most benefits to the already privileged. For the situation to change, the tech industry must reckon with its history and reassess its raison d’être; it must take regulation seriously and respond to the challenges with reform rather than defensive legal action. Artists, too, have to find ways to incorporate AI into their creative work so that they share in the benefits.
Butterick likens the current crisis in generative AI to the Napster controversy in 1999, when the peer-to-peer music download software threatened the music industry. “At the time, some people said Napster was the future; others said it was completely illegal. It turned out both views were correct,” he says. Napster failed because of mass copyright infringement, but it led companies like Spotify and Apple to bring music copyright owners into the conversation and to create a vast ecosystem of streaming music, Butterick observes. “Something like that is going to happen for AI as well.”

This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions.