
Reddit will tighten verification to keep out human-like AI bots

Cpvr

Community Advisor
Administrator
Redditors around the world were scandalized last week after learning that a team of researchers released a swarm of AI-powered, human-impersonating bots on the “Change My View” subreddit. The large-scale experiment was designed to explore just how persuasive AI can be.

The bots posted over 1,700 comments, adopting personas like abuse survivors or controversial identities like an anti-Black Lives Matter advocate.

For Reddit, the incident was a mini-nightmare. Reddit’s brand is associated with authenticity — a place where real people come to share real opinions. If that human-focused ecosystem is disturbed with AI slop or becomes a place where people can’t trust that they’re getting information from actual humans, it could do more than threaten Reddit’s core identity. Reddit’s bottom line could be at stake, since the company now sells its content to OpenAI for training.

The company condemned the “improper and highly unethical experiment” and filed a complaint with the university that ran it. But that experiment was only one of what will likely be many instances of generative AI bots pretending to be humans on Reddit for a variety of reasons, from the scientific to the politically manipulative.

To protect users from bot manipulation and “keep Reddit human,” the company has quietly signaled an upcoming action — one that may be unpopular with users who come to Reddit for another reason: anonymity.

On Monday, Reddit CEO Steve Huffman shared in a post that Reddit would start working with “various third-party services” to verify a user’s humanity. This represents a significant step for a platform that has historically required almost no personal information for users to create an account.

“To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information,” Huffman wrote. “Specifically, we will need to know whether you are a human, and in some locations, if you are an adult. But we never want to know your name or who you are.”

Social media companies have already started implementing ID checks after at least nine U.S. states and the U.K. passed laws mandating age verification to protect children on their platforms.

A Reddit spokesperson declined to explain under what circumstances the company would require users to go through a verification process, though they did confirm that Reddit already takes measures to ban “bad” bots. The spokesperson also wouldn’t share more details about which third-party services the company would use or what kind of personally identifying information users would have to offer up.

Many companies today rely on verification platforms like Persona, Alloy, Stripe Identity, Plaid, and Footprint, which usually require a government-issued ID to verify age and humanity. Then there’s the newer and more speculative tech, like Sam Altman’s Tools for Humanity and its eye-scanning “proof of human” device.

Opponents to ID checks say there are data privacy and security risks to sharing your personal information with social media platforms. That’s especially true for a platform like Reddit, where people come to post experiences they maybe never would have if their names were attached to them.

It’s not difficult to imagine a world in which authorities might subpoena Reddit for the identity of, for example, a pregnant teen asking about abortion experiences on r/women in states where it is now illegal. Just look at how Meta handed over private conversations between a Nebraska woman and her 17-year-old daughter, which discussed the latter’s plans to terminate a pregnancy. Meta’s assistance led law enforcement to acquire a search warrant, which resulted in felony charges for both the mother and daughter.

That’s exactly the risk Reddit hopes to avoid by tapping outside firms to provide “the essential information and nothing else,” per Huffman, who emphasized that “we never want to know your name or who you are.”

“Anonymity is essential to Reddit,” he said.

The CEO also noted that Reddit would continue to be “extremely protective of your personal information” and “will continue to push back against excessive or unreasonable demands from public or private authorities.”

Source: https://techcrunch.com/2025/05/06/reddit-will-tighten-verification-to-keep-out-human-like-ai-bots/
 
This was spotted on the Redditalternatives forum: https://www.reddit.com/r/RedditAlternatives/s/yqDDiqGhVI

Reddit and internet changes

I’m sorry, I don’t know your community that well, but I was banned from r/futurology for this post.

I’m going to say what we all know.

Reddit has undergone a shift, and the presence of bots/AI-generated content, propaganda, and coordinated campaigns is a documented reality.

1. Bots & AI Are Proliferating on Reddit
- Automated Accounts: Tools like ChatGPT make it trivial to generate human-like comments and posts at scale. Operators deploy these to farm karma, sway discussions, or spam.
- Karma Farms: Bots repost popular old content and comments to accumulate karma; the aged, high-karma accounts are then sold for propaganda or advertising.
- Detection Difficulty: Reddit’s anti-bot systems struggle to keep up, and AI can now mimic human writing styles convincingly.

2. Astroturfing & Propaganda Are Common
- Corporate/PR Influence: Companies use bots or paid accounts to promote products, downplay scandals, or attack critics (e.g., gaming, tech, or finance subs).
- Political Manipulation: State actors (Russia, China, Iran, etc.) and domestic groups manipulate narratives on news/political subs. The "news" you see may be amplified or distorted.
- Agenda-Driven Subreddits: Entire communities are sometimes covertly run by ideological/political groups to push narratives.

3. Reddit is Less "Human"
- Algorithmic Incentives: Reddit rewards engagement (upvotes, controversy). Bots/farms exploit this, drowning out organic discussion.
- Decline of "Old Reddit" Culture: As Reddit commercialized (IPO, API changes), authentic communities shrank and many longtime contributors left, leaving voids that bots fill.
- News Aggregation Risks: Reddit is now a top news source, but unvetted. Bots can spread misinformation rapidly via upvote manipulation.

4. How to Spot Suspicious Activity
- Account Red Flags:
  - Generic usernames (e.g., "Word_Number123").
  - Sudden activity bursts after months of silence.
  - Overly polished, emotionless, or repetitive language.
- Post Patterns:
  - Rapid, identical comments across threads.
  - Posts pushing niche agendas (e.g., crypto, supplements, fringe politics).
  - "Outrage bait" designed to provoke engagement.
- Subreddit Anomalies:
  - Sudden shifts in moderation or topic focus.
  - Highly polarized discussions with little fact-checking.
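The account-level red flags above can be turned into a rough heuristic score. Here is a minimal sketch in Python; the function name, inputs (username, list of comment strings, days of dormancy before the latest burst), and thresholds are illustrative assumptions, not any real Reddit API or detection tool.

```python
import re
from collections import Counter

# Rough pattern for generic "Word_Number123"-style usernames (an assumption,
# not an actual detection rule used by Reddit).
GENERIC_NAME = re.compile(r"^[A-Z][a-z]+_?[A-Z]?[a-z]*_?\d{2,}$")

def red_flag_score(name: str, comments: list[str], dormant_days: int) -> int:
    """Count how many of the listed account red flags an account trips."""
    score = 0
    if GENERIC_NAME.match(name):       # generic "Word_Number123" username
        score += 1
    if dormant_days > 180:             # sudden burst after months of silence
        score += 1
    counts = Counter(comments)
    if comments and max(counts.values()) > 1:  # identical comments repeated
        score += 1
    return score

# Example: generic username, long dormancy, and a duplicated comment
print(red_flag_score("Brave_Eagle4821", ["Great point!", "Great point!"], 240))  # prints 3
```

A real moderation pipeline would weigh these signals rather than count them equally, but even a crude score like this is useful for triaging which accounts deserve a closer manual look.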

5. Protecting Your Trust
- Cross-Check Sources: Treat Reddit "news" as a lead, not truth. Verify via established outlets (AP, Reuters).
- Stick to Niche Communities: Smaller, topic-specific subs (e.g., hobbies, academics) have fewer bots.
- Use Tools: Browser extensions like "Reddit Investigator" or "Bot Sentinel" analyze account behavior.
- Question Consensus: If a thread feels unnaturally polarized or amplified, disengage.

Bottom Line:

Reddit is no longer a purely organic space. While genuine human interaction still exists (especially in smaller communities), the platform is saturated with manipulation. This doesn’t mean "all Reddit is bots," but it does mean healthy skepticism is essential. Trust should be earned through consistent, transparent behavior—not assigned by default.

Stay critical, don’t extend trust by default, and prioritize subs with active, transparent moderation. The degradation here reflects a broader internet crisis; awareness is the first defense.

Admission: I did use a language model (AI) to collect and format this information. That doesn’t change that it took personal effort, thought, and intention to make this post, and it definitely doesn’t change my message.