In a surprising revelation, researchers have admitted to quietly testing AI-generated comments on Reddit users, raising fresh concerns about transparency, consent, and the influence of artificial intelligence in public spaces. The experiment, disclosed over the weekend by the moderators of r/changemyview, was described by them as “psychological manipulation” of users who had no idea they were being tested.
Conducted earlier this year, the experiment involved researchers inserting AI-generated comments into live Reddit threads without notifying users. “The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users,” the subreddit’s moderators wrote in a lengthy post alerting Redditors to the research. “This experiment deployed AI-generated comments to study how AI could be used to change views.” The comments were crafted using large language models and designed to mimic human conversation as naturally as possible. The goal was to study how people interact with AI in real-world online discussions and to assess whether users could distinguish between human-written and AI-written content.
The work played out on Reddit’s r/changemyview, a massive subreddit where users post spicy takes and invite others to debate them. The researchers used large language models (LLMs) to cook up AI-generated comments and slipped them into discussions without anyone knowing. According to the sub’s moderators, the AI wasn’t just posting generic responses, either: it pretended to be all sorts of people, including a sexual assault survivor, a trauma counselor “specializing in abuse,” even a “Black man opposed to Black Lives Matter.” Yikes. While many of the original comments have since been deleted, some receipts live on in an archive put together by 404 Media.
In a draft of their paper, the unnamed researchers brag about how they didn’t just generate random responses — they actually tried to personalize them by scraping users’ Reddit histories to guess things like gender, age, ethnicity, location, and political views. All without consent, of course.
The r/changemyview mods were understandably furious, pointing out that the researchers broke multiple subreddit rules, like the one that requires you to disclose if you’re using AI and the one that bans bots. They’ve officially complained to the University of Zurich (where the researchers are based) and asked that the paper not be published.
And it’s not just the mods who are mad — Reddit’s top lawyer, Ben Lee, weighed in too, calling the researchers’ actions “deeply wrong on both a moral and legal level,” and hinting that legal action might be on the table.
The researchers, for their part, defended the experiment, saying it had been greenlit by a university ethics committee and could actually help Reddit and other communities guard against more dangerous uses of AI. In a reply to the r/changemyview mods, they admitted the study was an “unwelcome intrusion” and that they understood why people were upset, but argued the benefits outweighed the risks. Their logic? Better to run a low-risk, controlled test now than let bad actors use AI to do real damage later, like meddling in elections or spreading hate online.
Some experts are calling for stricter regulations on the deployment of AI in public digital spaces, warning that unchecked experiments could lead to long-term damage to online communities.
This incident comes at a time when public skepticism about AI’s role in media, entertainment, and social interaction is already rising. It highlights the pressing need for clear ethical guidelines, transparency, and accountability when integrating AI into environments where real people interact.
As AI continues to become more deeply woven into everyday digital life, the debate over how—and whether—AI should participate in public discourse is just beginning.