The Problem with AI “Artists”

A performance reel. Instagram, TikTok, and Facebook accounts. A separate contact email for enquiries. All staples of an actor’s website.

Except these all belong to Tilly Norwood, an AI “actor.”

Tilly Norwood exemplifies one of the newer AI trends: AI “artists” that eerily resemble real humans (which, according to their creators, is the goal). Eline Van der Velden, the creator of Tilly Norwood, has said that she is focused on making the creation “a big star” in the “AI genre,” a distinction that has been used to justify AI-created artists as not taking jobs from real actors. Van der Velden has explicitly said that Tilly Norwood was made photorealistic to provoke a reaction, and it’s working: talent agencies are reportedly looking to represent it.

And it’s not just Hollywood. Major producer Timbaland has created his own AI entertainment company and launched his first “artist,” TaTa, whose music is created by uploading his own demos to the platform Suno, reworking them with AI, and adding lyrics afterward.

But while technologically impressive, the emergence of AI “artists” risks devaluing creativity as a fundamentally human act, and in the process, dehumanizing and “slopifying” creative labor.

Heightening Industry at the Expense of Creativity

The generative AI boom is deeply tied to the creative industries, with a profit-hungry machine monetizing every movie, song, and TV show as much as it possibly can. This, of course, predates AI “artists,” but AI is making the agenda even clearer. One of the motivations behind the Writers Guild of America strike of 2023 was countering the threat of studios replacing writers with AI.

For industry power players, employing AI “artists” means less reliance on human labor—cutting costs and making it possible to churn out products at a much higher rate. And in an industry already known for poor working conditions, there’s significant appeal in dealing with a creation they do not “need” to treat humanely.

Technological innovation has always risked eliminating certain jobs, but AI “artists” are a whole new monster for the industry. It isn’t just about speeding up processes or certain tasks but about excising human labor from the product. In an industry where it is already notoriously hard to make money as a creative, demand for human work will become even scarcer—and that’s not even considering the consequences for the art itself.

The AI “Slop” Takeover

The drive to make money over quality has always prevailed in the industry; Netflix and Hallmark aren’t making all those Christmas romantic comedies with the same plot because they’re original stories, nor are studios embracing endless reboots and remakes of successful art because it would be visionary to remake a ’90s movie with a 20-something Hollywood star. But these productions still have their audiences, and in the end, they still require creative output and labor to be made.

Now, imagine that instead of these rom-coms cluttering Netflix, we have AI-generated movies and TV shows starring creations like Tilly Norwood, with soundtracks whose voices, lyrics, and production were all generated by AI.

The whole model of generative AI depends on regurgitating and recycling existing data. Admittedly, it’s a technological feat that Suno can generate a song and Sora can convert text to video; what it is NOT is a creative renaissance. AI-generated writing is already taking over, from essays in the classroom to motivational LinkedIn posts, and in addition to ruining the em dash, it consistently puts out material of low, robotic quality. AI “artists” that “sing” and “act” are the next uncanny destroyers of quality, and they will likely alienate audiences, who turn to art to feel connection.

Art has a long tradition of being used as resistance and a way of challenging the status quo; protest music has been a staple of culture—look no further than civil rights and antiwar movements in the United States in the 1960s. It is so powerful that there are attempts by political actors to suppress it and punish artists. Iranian filmmaker Jafar Panahi, who won the Palme d’Or at the Cannes Film Festival for It Was Just an Accident, was sentenced to prison in absentia in Iran for making the film, and this is not the first punishment he has received for his films. Will studios like Sony or Warner Bros. release songs or movies like these if they can just order marketing-compliant content from a bot?

A sign during the writers’ strike famously said, “ChatGPT doesn’t have childhood trauma.” An AI “artist” may be able to carry out a creator’s agenda to a limited extent, but what value does that work have coming from a generated creation with no lived experiences or emotions—especially when those are what drive the motivation to make art in the first place?

To top it off, generative AI is not a neutral entity by any means; we’re in for a lot of stereotypical and harmful material, especially without the input of real artists. The fact that most AI “artists” are portrayed as young women with specific physical features is not a coincidence. It’s an intensification of the longstanding trend of making virtual assistants “female”—from ELIZA to Siri to Alexa to AI “artists” like Tilly Norwood and Timbaland’s TaTa—reinforcing the trope of relegating women to “helper” roles designed to cater to the user’s needs, a clear manifestation of human biases.

Privacy and Plagiarism

Ensuring that “actors” and “singers” look and sound as human as possible in films, commercials, and songs requires that they be trained on real-world data. Tilly Norwood creator Van der Velden has defended herself by claiming that she used only licensed data and went through an extensive research process, looking at thousands of images for her creation. But “licensed data” does not automatically make taking the data ethical; look at Reddit, which signed a multimillion-dollar contract allowing Google to train its AI models on Reddit data. The vast data of Reddit users is not protected, just monetized by the organization.

AI expert Ed Newton-Rex has discussed how generative AI consistently steals from artists and has proposed measures to ensure that models are trained only on licensed or public-domain data. There are ways for individual artists to protect their online work: adding watermarks, opting out of data collection, and taking measures to block AI bots. While these strategies can keep data more secure, given how vast generative AI is, they’re probably more a safeguard than a solution.
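Blocking AI bots, for example, often starts with a site’s robots.txt file. Here is a minimal sketch, assuming an artist wants to turn away the publicly documented training crawlers from OpenAI, Google, and Common Crawl (compliance is voluntary on the crawler’s part, which is part of why this remains a safeguard rather than a solution):

    # Ask known AI training crawlers not to scrape this site.
    # GPTBot is OpenAI's crawler; Google-Extended controls Google's
    # AI training; CCBot is Common Crawl's crawler.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /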

Jennifer King of Stanford’s Institute for Human-Centered Artificial Intelligence has suggested ways to protect data and personal information more generally, such as making “opted out” the default for data sharing, and legislation that focuses not just on transparency of AI use but on its regulation—likely an uphill battle with the Trump administration trying to preempt state AI regulations.

This is the ethical environment AI “artists” live in. Think of all the faces of real people that went into making Tilly Norwood. A company may have licensed that data, but the artists whose “data” is their likeness and creativity likely never consented (at least not directly). In this light, AI “artists” are a form of plagiarism.

Undermining Creativity as Fundamentally Human

Looking at how art was transformed by technology before generative AI, one could argue that this is simply the next step in a process of change rather than something to be concerned about. But photography, animation, typewriters, and all the other inventions used to justify the onslaught of AI “artists” did not eliminate human creativity. Photography was not a replacement for painting but a new art form, even if it did worry painters. There’s a difference between a new, experimental way of doing something and extensively using data (particularly data taken without consent) to make creations that blur the lines of what is and isn’t human. Rebecca Xu, a professor of computer art and animation at Syracuse University who teaches an “AI in Creative Practice” course, argues that artists can incorporate AI into their creative process. But as she warns, “AI offers useful tools, but you still need to produce your own original work instead of using something generated by AI.”

It’s hard to understand exactly how AI “artists” benefit human creativity, which is a fundamental part of our expression and intellectual development. Just look at the cave art from the Paleolithic era. Even humans 30,000 years ago who didn’t have secure food and shelter were making art. Unlike other industries, art did not come into existence purely for profit.

The arts are already undervalued economically, as is evident from the lack of funding in schools. Today, a kid who may want to be a writer will likely be bombarded with marketing from generative AI platforms like ChatGPT urging them to use these tools to “write” a story. The result may resemble a narrative, but it won’t necessarily have the creativity or emotional depth that comes from being human, and more importantly, the kid didn’t actually write. Still, the very fact that this AI-generated story is now possible curbs the industry’s need for human artists.

How Do We Move Forward?

Though profit-hungry power players may be embracing AI “artists,” the same cannot be said for the public. The vast majority of artists and audiences alike are not interested in AI-generated art, much less AI “artists.” The power of public opinion shouldn’t be underestimated; the writers’ strike is probably the best example of that.

Collective mobilization will thus likely be key to challenging AI “artists” against the interests of studios, record labels, and other members of the creative industry’s ruling class. There have been wins already, such as the Writers Guild of America strike in 2023, which resulted in a contract stipulating that studios can’t credit AI as a writer. And because music, film, and television are full of stars, often with financial and cultural power, the resistance being voiced in the media could benefit from more actionable steps; for example, a prominent production company run by an A-list actor could pledge not to feature any AI-generated “artists” in its work.

Beyond industry and labor, pushing back on the notion that art is unimportant unless you’re a “star” can also play a significant role in changing the conversation. This means funding art programs in schools and libraries so that young people know that art is something they can do, something fun that brings joy—not necessarily to make money or a living but to express themselves and engage with the world.

The fundamental risk of AI “artists” is that they will become so commonplace that pursuing art will feel pointless, and that much of the art we consume will lose its fundamentally human qualities. But human-made art and human artists will never become obsolete—that would require eliminating the human impulse to create. The challenge is making sure that artistic creation is not relegated to the margins of life.

Countering a Brutal Job Market with AI

Headlines surfaced by a simple “job market” search describe it as “a humiliation ritual,” “hell,” and “an emerging crisis for entry-level workers.” The US unemployment rate for recent graduates is at an “unusually high” 5.8%—even Harvard Business School graduates have been taking months to find work. Inextricable from this conversation is AI, both for its potential to automate entry-level jobs and as a tool employers use to evaluate applications. But the widespread availability of generative AI platforms raises an overlooked question: How are job seekers themselves using AI?

An interview study with upcoming master’s graduates at an elite UK university* sheds some light. In contrast to popular narratives about “laziness” or “shortcuts,” AI use reflects job seekers strategically tackling the digitally saturated, competitive reality of today’s job market. Here are the main takeaways:

They Use AI to Play an Inevitable Numbers Game

Job seekers described feeling the need to apply to a high volume of jobs because of how rare it is to get a response amid the competition. They send out countless applications on online portals and rarely receive so much as an automated rejection email. As Franco, a 29-year-old communications student, put it, particularly with “LinkedIn and job portals” saturating the market, his CV is just one “in a spreadsheet of 2,000 applicants.”

This context underlies how job seekers use AI: by helping tailor résumés or write cover letters, it lets them spend less time on any given application and thus send out more. Seoyeon, a 24-year-old communications student, described facing repeated rejections no matter how carefully she crafted an application or how qualified she was.

[Employers] themselves are going to use AI to screen through those applications….And after a few rejections, it really frustrates you because you put in so much effort and time and passion for this one application to learn that it’s just filtered through by some AI….After that, it makes you lean towards, you know what, I’m just gonna put less effort into one application but apply for as many jobs as possible.

Seoyeon later said that she even asks AI what “keywords” she should include in her application, given the use of AI in hiring systems.

Her reflection reveals that AI use is not a shortcut but a felt necessity for coping with inevitable rejection and AI screeners, especially since companies themselves use AI to read applications, making her “passion” feel like a waste.

AI as a Savior from Emotional Labor

Applying to jobs means constant rejection and little human interaction, a deeply emotional process that students described as “draining” and “torturing.” AI, then, is a way to reduce not just the time the labor takes but its emotional weight.

Franco felt that having to portray himself as “passionate” for hundreds of jobs that he would not even hear back from was an “emotional toll” that AI helped him manage.

Repeating this process to a hundred job applications, a hundred job positions and having to rewrite a cover letter in a way that sounds like if it was your dream, well I don’t know if you can have a hundred dreams.…I would say that it does have an emotional toll….I think that AI actually helps a lot in terms of, okay, I’m going to help you do this cover letter so you don’t have to mentally feel you’re not going to get the shot.

Using AI thus acted as a buffer for the emotional difficulties of being a job seeker, allowing students to conserve mental energy in a grueling process while still applying to many jobs.

The More Passionate They Are, the Less AI They Use

AI use was not uniform by any means, even though the job application process often requires the same materials. Job seekers had “passion parameters” in place, dialing down their AI use for jobs they were more passionate about.

Joseph, a 24-year-old psychology student, put this “human involvement” at “definitely more than 50%” for a role he truly desires, whereas for a less interesting role, it’s about “20%–30%.” When passion is involved, he does deep research into the company rather than relying on AI’s “summarized, nuanced-lacking information,” and writes the cover letter from scratch, using AI only to critique it. In contrast, for less desirable jobs, AI plays a much more generative role, creating the initial draft that he then edits.

This points to the fact that while AI feels important for labor efficiency, students do not use it indiscriminately, especially when passion is involved and they want to put their best foot forward.

They Understand AI’s Flaws (and Work Around Them)

In their own words, students are not heedlessly “copying and pasting” AI-generated materials. They are critical of AI tools and navigate them with their concerns in mind.

Common flaws in AI-generated material include sounding “robotic” and “machine-like,” with telltale “AI-sounding” words such as “explore” and “delve into.” Joseph asserted that he can easily tell which text is written by a human, because AI-generated text lacks the “passion and zeal” of someone who is genuinely hungry for the job.

Nandita, a 23-year-old psychology student, shared how AI’s tendency to “put you on a pedestal” came through in misrepresented facts. When she asked AI to tailor her résumé, it embellished her experience of “a week-long observation in a psychology clinic” into “community service,” which she strongly felt it wasn’t. She surmised this happened because community service was mentioned in the job description she had fed the AI; she caught the error and corrected it.

Consequently, using AI in the job hunt is not a passive endeavor but requires vigilance and a critical understanding to ensure its flaws do not hurt you as a job seeker.

They Grapple with AI’s Larger Implications

Using AI is not an unconditional endorsement of the technology; all the students were cognizant of (and worried about) its wider social implications.

John, a 24-year-old data science student, drew a distinction between using AI in impersonal processes versus human experiences. While he would use it for “a cover letter” for a job he suspects will be screened by AI anyway, he worries how it will be used in other parts of life.

I think it’s filling in parts of people’s lives that they don’t realize are very fundamental to who they are as humans. One example I’ve always thought of is, if you need it for things like cover letters, [that’s] OK just because it’s something where it’s not very personal.…But if you can’t write a birthday card without using ChatGPT, that’s a problem.

Nandita voiced a similar critique, drawing on her psychology background; while she could see AI helping with tasks like “admin work,” she worries about its use for therapy. She argues that an AI therapist would be “100% a Western…thing” and would fail to connect with someone “from the rural area in India.”

This understanding shows that graduates distinguish between using AI for impersonal processes, like job searching in the digital age, and using it in human-to-human situations where it poses a threat.

Some Grads Are Opting Out of AI Use

Though most of those interviewed were using AI, some rejected it entirely. They voiced the same qualms AI users had, including AI sounding “robotic” and not “human.” Julia, a 23-year-old law student, specifically mentioned that her field requires “language and persuasiveness” with “a human tone” that AI cannot replicate, and that not using it would “set you apart” in job applications.

Mark, a 24-year-old sociology student, acknowledged the same concerns as AI users about a saturated online arms race, but instead of using AI to send out as many applications as possible, had a different strategy in mind: “talking to people in real life.” He described how he once secured a research job through a connection in the smoking area of a pub.

Importantly, these job seekers faced the same job-market challenges as AI users but opted for different strategies, ones that emphasize human connection and voice.

Conclusion

For graduate job seekers, AI use is a layered strategy that responds directly to the difficulties of the job market. It is not about cutting corners but about carefully adapting to circumstances that require new forms of digital literacy.

Moving away from discourse that frames job seekers as lazy or unable to write their own materials forces us to look at how the system itself can be improved for applicants and companies alike. If employers don’t want AI use, how can they create a process that makes room for human authenticity instead of AI-generated materials that sustain the broken cycle of hiring?

*All participant names are pseudonyms.
