Generative AI Podcasts Are Here. Prepare to Be Bored

Inside the strange world of podcasts made by artificial intelligence, where even creators aren’t sure who wants to hear robot chit-chat.
Illustration: mphillips007/Getty Images

Here’s the thing about podcasts: There are too many of them. 

More than 4 million, to be precise, according to the database Podcast Index. In the past three days alone, nearly 103,000 individual podcast episodes were published online, a deluge of audio content so voluminous that listeners need never run out of options. You could spend the rest of your life working through the existing true crime catalog on Apple Podcasts or the sports chat shows on Spotify and end up dying of old age in 2070 while Michael Barbaro reads an ad for Mailchimp to your corpse. 

In the ongoing generative AI gold rush, though, opportunistic entrepreneurs are looking for entry into even the most saturated markets. A wave of startups, including ElevenLabs, WondercraftAI, and Podcastle, has introduced easy-to-use tools that generate AI voices in minutes. So, as if on cue, AI podcasts are here, whether anyone asked for them or not. 

In these early days, nobody’s keeping track of how many listeners this strange new genre of podcast has. Major hubs like Apple Podcasts and Spotify don’t have separate charts for robot hosts. There are, however, a few individual AI podcasts that have clearly found audiences, at least for their first crop of episodes. 

The first AI-generated podcast to take off cheated a little—it used the cloned voice of the world’s most popular human podcast host. The Joe Rogan AI Experience is a series of simulations of Rogan gabbing with (equally fake) guests like OpenAI CEO Sam Altman and former president Donald Trump. Shortly after the first episode came out, the real Rogan tweeted a link to it. “This is going to get very slippery, kids,” he wrote. 

On YouTube, the dupe racked up more than half a million views. Some listeners didn’t even care that it was AI. “This is actually good enough for me. Good stuff,” one wrote. 

The Joe Rogan AI Experience was created by a Rogan fan named Hugo. (He declined to give WIRED his full name because he does not want to be professionally associated with the project.) He has a Patreon to support production of the show and recently turned on monetization on YouTube, but he doesn’t expect to make any real income off it—especially as he’s aware that he doesn’t have consent to use Rogan’s voice or likeness, and that podcasting platforms may end up banning this type of impersonation. 

Hugo created the series because he wanted to showcase what AI voice tools can do. Although he carefully edits the episodes to make them flow for listeners—they can take days or weeks to get right—he doesn’t think the conversations themselves are particularly enthralling, even if they’re reasonably accurate imitations. “Apart from listening to the podcast because of its technological advancement, there’s no point,” Hugo says. “It’s just wasted time.” 

It’s unclear whether the audience will hang around, or if they simply wanted to check out something unusual and new; Hugo has released four episodes, and each subsequent installment has pulled a smaller audience than the last. 

WIRED spoke with several other creators of AI-generated podcasts who echoed Hugo’s take. They enjoyed playing around with the technology, but they consider the end results a byproduct of experimentation. Israel-based sound engineer Lior Sol, for example, created a trippy podcast called Myself, I Am and That using ElevenLabs’ tools. He made a clone of his voice and then a clone of that clone in an extremely meta conversation. “I’m definitely having fun with it,” he says. But that doesn’t mean he’s chasing big audiences. Right now, his listeners number in the dozens. His friends like it, he likes it—it’s an art project, and a chance to fiddle around with new tech, not an attempt to make something commercial. 

Some other creators don’t even expect audiences to like their output, especially once the novelty wears off. Andi Durrant, for example, helped create an AI-generated podcast called Synthetic Stories at his UK-based content marketing startup. In addition to featuring cloned host voices, every other element of Synthetic Stories is AI-generated, including the script and sound design. “We were proud of it as an experiment,” Durrant says. As a creative work, though? “You really quickly get the limitations.” 

However, Dimitris Nikolaou, CEO of the AI podcasting startup WondercraftAI, believes that audiences could develop loyalty to AI-generated podcasts. His team created Hacker News Recap, which offers daily short summaries of the top stories on the Y Combinator-run forum Hacker News, as a proof of concept to show what his platform can do. It’s currently sitting at No. 31 in Apple Podcasts’ tech chart in the US. (Elsewhere, it’s performing even better. “We’re currently number two in Latvia for some reason,” Nikolaou says.) 

Nikolaou doesn’t think that Hacker News Recap’s AI-generated scripts are superior to those written by humans, or its artificial voices more melodic. “There’s nothing special to it. It’s the same content you’d find in any other tech podcast,” he says. “It’s more the fact that we can be so consistent and publish every morning, no matter what.” 

The podcast is designed to showcase how Wondercraft’s services work: Both the script and audio are AI-generated based on whatever posts appear at the top of Hacker News. (Wondercraft got Y Combinator’s permission to use its content, which is not particularly surprising; the startup incubator is also one of its investors.) For people who just want an information digest in audio form, it’s a consistent offering. 

He also believes Wondercraft will appeal to some independent creative types, like newsletter writers who might want to put out an audio version of their blog posts but don’t have the time to do it themselves or the money to hire a reader.

Human podcasters have already started embracing AI editing tools, which are now used regularly by major podcasting studios to simplify tasks like removing background noise or clarifying mumbled words. Some are also playing around with the idea of cloning their voices for advertisements. This week, for example, The Ringer founder Bill Simmons discussed the possibility of developing ads read by AI-generated voice clones of the hosts of his stable of Spotify podcasts. 

Wholly AI-generated presenters, though, are another story altogether.

Who? Weekly cohosts Bobby Finger and Lindsey Weber see the potential use cases for AI editing tools, but they don’t foresee AI voice-generating tools holding any real value for their beloved, long-running podcast. “The only way it would make sense is in a literal joke,” Finger says. “It’s not convincing.” 

Kelsey McKinney, the host of the recent breakout hit Normal Gossip, is skeptical that AI-generated podcasts will connect with audiences in a lasting way. “The AI stuff, I just hate it, in every form,” she says. “People want to feel connected to other people. The reason podcasts are so popular is because listeners feel connected to the people who make them.” 

McKinney sees AI podcasts as part of a larger push by entertainment corporations to automate and devalue the arts—an effort that is being led by cost-cutting executives rather than creators. “They want to use AI for podcasts. They want to use AI for screenwriting. They want to use AI for actors,” she says. “What they’re trying to say is that they don’t want to pay creative people.”

Especially with podcasts like Who? Weekly and Normal Gossip—chatty, digressive, funny, weird—the core appeal of tuning in week after week is hearing what the specific humans at the microphone have to say. No matter how advanced the technology gets, the idea that a robot could fully replicate the experience is still pure science fiction. (Spike Jonze’s Her 2: Her Starts a Podcast coming to theaters in 2033.) 

That’s not to say that Nikolaou is off-base with Hacker News Recap; some people obviously do want summarized news articles read to them by a pleasant artificial voice devoid of personality. But summarized news articles do not represent the vast majority of popular podcasts. The medium is defined by intimacy, by listeners feeling like they are overhearing a chat between old friends, or sitting in the back of the room at a particularly brilliant panel. In her 2022 book Podcasting as an Intimate Medium, podcast researcher Alyn Euritt describes how listeners can come to see themselves as “members of an imagined national community.” In niches like news summaries, a robot could suffice. But the podcast business isn’t built on information; it’s built on conversation.

AI podcasts are a tiny bubble within the larger ballooning market for generative AI products and services, but they illustrate broader tensions within the young industry. The technology is simultaneously sophisticated and ersatz—it can produce sounds and visuals that pass for the real thing, so long as you’re not paying close attention, but it gets the details wrong. And right now, discussions about AI’s impact are thoroughly distorted by hyperbole. We mistake attention-grabbing for paradigm-shifting. (Another comment on the fake Rogan podcast: “I no longer have to wonder what my grandparents felt like as they watched technology change their world.”) The words might be in the right order. But the tone’s just so damn flat.