When Edmund Cartwright was at work on the world's first power loom at the dawn of the Industrial Revolution, he sent a prototype of his machine to weavers in Manchester, at the time the center of England's cloth production. Cartwright hoped these weavers could help him improve his nascent invention. They refused.

As recounted in Blood in the Machine, tech journalist Brian Merchant's history of the Luddite movement, textile workers destroyed the machines and factories that had undermined their wages, degraded their working conditions, and, eventually, made them obsolete. Understandably, the weavers were not keen to contribute to something intended to replace them. Their choice was no mystery to Cartwright.

"Indeed, the workmen who had undertaken it despaired of ever making it answer the purpose it was intended for," wrote Cartwright in a letter to a friend.

More than two hundred years later, we are living through another pivotal moment in labor history: the widespread introduction of artificial intelligence. But unlike the weavers of yore, some artists and creatives are willing to cooperate with the companies developing the very tools designed to replace them—or, at least, diminish their labor—whether it's contemporary artists accepting residencies or filmmakers joining beta testing programs.

Why?

For artists like Refik Anadol and Alex Reben, who have been artists-in-residence at NVIDIA and OpenAI, respectively, there is simply no threat of "being replaced" akin to what the now-extinct weavers experienced. Artists with a capital A don't work in a traditional labor market, so working with AI companies is an exciting chance to bend powerful new technology into new artistic tools.

"AI is the new canvas. This is the new painting. This is the new brush," Anadol told ARTnews. "So NVIDIA is providing a brush, they're providing a pigment, they're providing a canvas."

Artist Refik Anadol poses at his new exhibition at the Serpentine North Gallery in February 2024. For the show, Anadol unveiled a new immersive environment made from 5 billion images of coral reefs and rainforests, using Stable Diffusion. Getty Images

Anadol has found major success using machine-learning algorithms to produce site-specific immersive installations, live audiovisual performances, and artworks tokenized on the blockchain. In his practice, Anadol primarily creates "data sculptures" that visualize vast quantities of data on everything from the environment to art history. The artist became Google's first artist-in-residence in 2016, the same year he began working with NVIDIA. Both companies have provided the resources he needs to make works that require heavy data processing, both during his residencies and as an independent artist.

In 2022, Anadol worked with the Museum of Modern Art in New York to create Unsupervised – Machine Hallucinations – MoMA, a generative artwork that uses a machine-learning model trained on the museum's visual archive to interpret and reimagine images of artworks in MoMA's collection. The museum acquired the work after it was displayed in the lobby for nearly a year.

For Unsupervised, NVIDIA donated two supercomputers: one to process the 138,000 images in the museum's public archive and the other to "dream" the visualization displayed on a 24-foot-tall high-resolution screen. What NVIDIA gave Anadol was not software—Anadol and his studio write their own custom software—but sheer processing power, which is prohibitively expensive to buy outright.

"To make work with AI you need strong computation," Anadol explained. "There's no way to do research or work with millions of images without supercomputers, and I'm not a company or a giant that can buy billions of dollars' worth of GPUs [graphics processing units]."

NVIDIA's computing power makes possible not just Unsupervised but most of Anadol's work. The company, he added, doesn't donate that power for monetary gain but because it wants to support artistic discoveries and breakthroughs.

Alexander Reben speaks at Engadget Expand New York at the Javits Center on November 8, 2014. Bryan Bedder/Getty Images

Alex Reben, meanwhile, told ARTnews that artists and artist-researchers have always worked with companies and institutions to develop and test the potential of new tools, whether Xerox machines, acrylic paint, or computer plotters.

In the late 1960s, artists Harold Cohen and Vera Molnár made some of the first computer artworks after gaining access to university research labs. Around the same time, engineers from Bell Laboratories teamed up with artists to create Experiments in Art and Technology, a nonprofit that facilitated collaboration between artists and engineers. Electrical engineer Billy Klüver, a founder of the group, worked with John Cage, Andy Warhol, Robert Rauschenberg, and other artists on groundbreaking projects. In the late 1980s, composer Tod Machover began creating computer-enhanced Hyperinstruments, like the Hyperviolin and Hyperpiano, at the Massachusetts Institute of Technology's Media Lab.

As with early computers, accessing AI—a metonym for many different but related technologies—has meant accessing the institutions that develop those technologies. But these days, it is businesses, more than universities, that have the kind of processing power artists are hungry to work with.

At the Christie's Art and Tech Summit this past July, Reben gave me a demo of the "conceptual camera" he developed as an artist-in-residence at OpenAI, the preeminent generative AI company of the moment, which has released industry-leading platforms like the text generator ChatGPT, the image generator DALL-E, and the recently unveiled video generator Sora. Reben, who began working with OpenAI as a beta tester years ago, built the conceptual camera as an AI software application: it takes photos captured on his phone and transforms them, using DALL-E, into AI-generated artworks printed on Polaroids, or poems printed out as receipts.

During an earlier Zoom demonstration, the app had come off as slightly gimmicky, but in person, the demo filled me with genuine wonder. Reben handed me a marker and told me to draw a picture. I doodled the devil. After he took a picture of the drawing, he tapped a couple of buttons in the app, and we watched the photo develop on the Polaroid printer. The black square revealed an AI-generated image that took inspiration from my drawing: a ghostly figure emerged, a mannequin head sporting ram horns. The program never makes the same image twice and produces them in a variety of styles.

On the left, the drawing fed into Alexander Reben's "conceptual camera." On the right, the image the app generated and printed. Shanti Escalante De-Mattei

The technology required to produce the image was impressive, but looking past the sparkle, it raised complicated ethical questions. For artist, writer, and activist Molly Crabapple, AI companies like NVIDIA and OpenAI represent environmental degradation and massive job losses for creatives.

"These companies are trying to launder their reputations by using high-end artists so they can say they are the friends of artists when in reality they are kicking working-class artists in the teeth every day," Crabapple told ARTnews. "They're just scabbing. And given the environmental costs of AI, it's the equivalent of doing a residency with British Petroleum."

In May, Goldman Sachs Research estimated that data center power consumption will grow by 160 percent by 2030 because of AI, and that carbon dioxide emissions from those centers may double. Meanwhile, both Google and Microsoft have revised their sustainability goals, revisions that Wired and the Wall Street Journal have reported are tied to the companies' AI power consumption.

Crabapple distinguishes "high-end" artists, who sell original artwork, show at institutions and galleries, and carry a certain kind of prestige, from working artists like illustrators or animators, who are hired by clients to make a particular artistic or commercial product, anything from an advertisement to a Pixar movie. In her view, by working with the former, tech companies shift the conversation from job obsolescence to new forms of creativity.

The tech giants have typically pushed the line that AI will make jobs more efficient or productive, not obsolete. However, during a talk at Dartmouth this past June, OpenAI chief technology officer Mira Murati bungled the company line.

"Maybe some creative jobs will go away, but maybe they shouldn't have been there in the first place," she told the crowd.

Crucially, the "creative jobs" Murati referenced are not those held by contemporary fine artists, who don't do wage work and so aren't vulnerable to the whims of bosses trying to cut labor costs. Working artists, like the animators and illustrators Crabapple describes, thus face a tough decision: resist automation to try to keep artistic traditions alive, or retrain.

For Sway Molina, an actor, artist, and filmmaker who started working with AI last year during the ongoing hiring slump in the film industry (dubbed the Hollywood Contraction), the answer is simple: join up before it's too late. Molina is a member of AI company Runway's Creative Partners Program, a beta testing program that gives qualified creatives early access to Runway's text-to-video tools.

"Everything is going to shift and change in ten years, and those who stay behind are the people that resist," Molina told ARTnews.

While Molina might come off as harsh, he said he simply doesn't have much faith that film unions will be able to protect jobs when studios eventually cut deals with AI companies. (Bloomberg reported in May that Alphabet and Meta have already approached film studios about potential partnerships.)

The job losses appear to have begun already. The Animation Guild's AI Task Force study, released this past January, found that 75 percent of survey respondents—which included hundreds of C-suite leaders, senior executives, and mid-level managers across six key entertainment industries—said that generative AI tools, software, or models had already resulted in job elimination, reduction, or consolidation in their business division. (One bright spot: only 26 percent thought generative AI would be fully integrated in the next three years.) This past July, Merchant reported for Wired that job losses in the video game industry already number in the thousands, and that remaining artists are being forced to use AI in their creative process.

"Generative AI can most capably produce 2D images that managers in cost-squeezed studios might consider 'good enough,' a term AI-watching creative workers now use as shorthand for the kind of AI output that's not a threat to replacing great art, but is a threat to their livelihoods," Merchant wrote.

For Molina, adopting early means protecting against his own job loss. "It's the early tinkerers of today that become the creative leaders of tomorrow," Molina said. "Those people who are just endlessly posting, posting, posting their AI works are the [ones] being set up as creative directors and AI community leaders."

A still from Sway Molina's Our T2 Remake (2024).

In the spirit of showing his colleagues what AI is poised to do, Molina produced a feature-length parody of Terminator 2: Judgment Day (1991), starring a cyborg teddy bear and loaded with jokes about AI delivered in Arnold Schwarzenegger's thick Austrian accent, his likeness and voice reconstituted and remixed courtesy of new AI tools from Runway and other companies. The movie, Our T2 Remake (2024), is nearly unwatchable, with uncanny figures, objects that don't obey the laws of physics, and faces that morph and melt without logic. And yet, it was made in six months rather than the usual six years, with 50 animators instead of hundreds.

With the tech developing so rapidly, one can squint and see where generative AI might be going. At least that』s what AI companies are hoping.

"We joke and say that if our tools can't do something that you want now, maybe just wait a few weeks and likely we'll be able to do it by then, because that is quite literally how quickly it has been moving," Emily Golden, who heads growth marketing at Runway, a role that includes overseeing the Creative Partners Program, told ARTnews.

Many AI companies have beta testing programs similar to Runway's, Golden said, adding that Runway hopes to use its own to build community. On X, users experimenting with text-to-video generation post their clips, music videos, surreal shorts, and crowd-sourced solutions, and discuss developments in the field. While some are longtime creatives, many had never made images or videos before using AI tools. The community gives Runway early (and copious) testing of its products—before they go out to clients—and free marketing.

Whether it's fine artists like Anadol and Reben taking up residencies or working artists joining beta testing programs, the draw seems to be early access to cutting-edge tools, tools that both the artists and the tech companies that make them can point to as expanding creativity rather than killing jobs.

And yet, the numbers speak for themselves.