
An avid fan fiction reader browses trending genres on the AO3 website late at night to relax. (Credit: Kristen Shen)
One quiet midnight, Rose Ding returned to the Archive of Our Own, usually referred to as AO3, one of the world’s most popular fan fiction sites, to post a short story celebrating her favorite anime character’s birthday. It was her first story in a year; she had been buried in schoolwork. Nervous but excited, she refreshed the page again and again, waiting for the first comment.
But the first comment hit hard.
“It’s clear from the last few chapters that you’re not passionate about the story anymore,” the comment said. Ding said that this sentence was the “most hurtful part” of the message, denying her commitment to fan fiction.
“I was so sad and nervous at first,” Ding said. “I even thought about deleting my account.”
She shared the comment on RedNote, a popular Chinese social platform, and found she wasn’t alone. Other creators received the same message.
Because the wording of the comment was generic and contained no details connecting it to her actual story, Ding suspected it might not have come from a person at all, but from a bot.
Her suspicion reflects a wider pattern. Since April 2024, creators on AO3 have received suspicious scam comments: repetitive, emotionally flat, and often irrelevant to the stories they respond to. According to discussions on Tumblr, AO3 board announcements, and posts on RedNote, these AI-generated comments have quietly infiltrated the site, disturbing its close-knit fandom communities.
The rise of generative AI has sparked new concerns across the fan fiction community, which values emotional connection and human creativity. Many are concerned that chatbot activity will disrupt the long-standing culture of mutual support and blur the concept of originality that the site has cultivated since its founding in 2009.
“We needed a non-commercial space that would be entirely devoted to the interests of fans as producers of non-commercial creative works,” said Rebecca Tushnet, a Harvard Law professor and one of the founders of the Organization for Transformative Works (OTW), the nonprofit that runs AO3. Tushnet is a current volunteer on AO3’s legal committee.
Launched in 2009, AO3 is a volunteer-run, ad-free platform where anyone can post fan fiction. It currently hosts more than 15 million fanworks and 8.8 million registered users, covering fandoms from Harry Potter to My Hero Academia, from Marvel to BTS. In 2019, it won a Hugo Award—one of science fiction’s highest honors—for Best Related Work, a historic recognition of fan labor as creative culture.
Long-term engagement in specific fandoms fosters tight-knit circles. Readers often build and sustain relationships through comments, leaving praise, offering suggestions, or carrying on conversations that stretch across multiple works.
“Thoughtful commenting fulfills a social need and facilitates the creative process,” said Regina Cheng, a research scientist in human-centered machine intelligence at Apple who holds a Ph.D. from the University of Washington. She has been interested in fan fiction since middle school and has published research on feedback exchange within fan fiction communities. As a writer, she said she reads every review that someone posts about her work and she often connects with frequent commenters across platforms for long-term beta reading or even in-person friendships.
But with the recent surge of chatbot activities, many writers are left unsure whether to trust the feedback, or even whether it came from a human.
Like the one Ding received, many of the comments accusing works of being AI-generated, oddly enough, are almost certainly AI-generated. These scam messages are often used to promote other AI tools, like AI essay editors or AI-detection services.
“It’s kind of ironic,” Ding said, “that these AI-generated scams are created to advertise other AI services.”
In a July AO3 board meeting, Qiao Chu, secretary of the OTW, acknowledged the problem. “We are aware that AI usage has contributed to an increase in spam,” Chu said. To combat the issue, AO3 has lowered its comment rate limit, particularly for guest accounts, and implemented new filters.
“The chance of a scam comment getting through now is one in a hundred,” Tushnet said.
The AO3 team has also begun to prioritize human traffic over bots and has limited who can join the platform through its invitation-only registration, which typically takes an hour or more and requires additional steps to verify that applicants are real people. In a Tumblr post from July 15, AO3 confirmed that several of its committees, including Policy & Abuse and Systems, are actively working to address the issue. Because much of the AI-generated spam is posted from guest accounts, the new policies also let writers flag likely bots or disable guest commenting on their works to reduce spam.

AO3 confirmed the bot activity on its Tumblr account and announced lower comment rate limits to curb it.
Still, the fixes aren’t without trade-offs. During recent board meetings, some users raised concerns that these features could inhibit legitimate interactions. One reader complained that they never received their invitation link, and another said they were blocked while trying to leave comments on their own work.
The management team acknowledged the frustration and emphasized that the goal is to reduce spam while optimizing user experience and communications. “Our aim is to slow down the spammers with minimal impact on legitimate commenters,” Chu said.
As generative AI becomes more sophisticated and accessible, AO3 faces new challenges from machine-made content, from scam comments to AI scraping. For the fan fiction community, the threat isn’t just about receiving junk comments; it’s about trust, authorship, and a creative culture sustained by human voices.
While most users are frustrated with navigating AI-generated content, the AO3 management team doesn’t plan to ban AI-generated content on the site.
“In general, it is not possible to conclusively determine when works are AI-generated unless the author discloses their use,” Chu said.
Attempts to detect AI-generated works can also cause unintended harm. According to Tushnet, AI-detection tools may misidentify content written by non-native English speakers or translated fanworks, disadvantaging users who write in less widely spoken languages.
AO3’s technical team is working to improve systems that balance user experience with site safety. With the platform expanding rapidly, the Organization for Transformative Works plans to hire a small number of paid, full-time staff to help maintain and develop the site—marking a shift from its all-volunteer roots.
The experience hasn’t shaken Ding’s love for writing fan fiction. But she has become more cautious when interacting with other users through comments.
“I still expect to see comments,” she said, referring to the suspected AI-generated ones. “But I really want to hear human voices.”
About the author(s)
Kristen Shen is a data journalist at Columbia Journalism School covering culture, business, and technology.
