Your Favourite Commenter Might Not Be Writing Their Own Comments

Slow AI · Apr 17, 2026

Key Takeaways

  • Roughly 1% of audited Substack commenters (5 of 595 accounts) use AI agents, generating 3‑5% of comments.
  • Uniform 1:1 comment‑to‑post ratio signals automated engagement.
  • Turing tests and canary traps expose keyword‑only responses.
  • Synthetic comments dilute signal, potentially slowing subscriber growth.
  • Creators can detect bots via API scripts and keep human interaction.

Pulse Analysis

AI‑generated comments are emerging as a quiet but measurable force on Substack. In a five‑week audit of the Slow AI newsletter, the author scraped 4,929 comments from 139 posts and profiled 595 unique accounts. By examining the comment‑to‑post ratio, running live Turing questions, and planting canary traps, the study identified five accounts that rely on virtual assistants—either human freelancers or pure AI scripts—to post on their behalf. A perfectly uniform 1:1 ratio across dozens of posts proved to be the clearest fingerprint of automation, distinguishing synthetic engagement from the irregular bursts typical of genuine readers.

The presence of synthetic comments erodes the informational value of a discussion thread. Even though automated accounts represent less than one percent of commenters, they contribute up to five percent of total comment volume, inflating engagement metrics without delivering authentic feedback. For newsletter creators, this noise can obscure the true sentiment of their audience and weaken the conversion funnel that turns thoughtful comments into paid subscriptions. The phenomenon mirrors earlier cycles on platforms like X and LinkedIn, where monetized engagement incentives spurred the growth of bot farms and ghost‑comment services, ultimately degrading user trust.

Creators can safeguard their communities by auditing comment patterns and enforcing human‑first interaction. Substack’s public API makes it easy to extract comment data and calculate each commenter’s comment‑to‑post ratio; a script that flags accounts with a constant 1:1 rate can be built in minutes using an LLM. Beyond detection, publishers should set clear policies that discourage automated replies and encourage genuine dialogue, while platforms might introduce friction such as CAPTCHA challenges for high‑frequency accounts. Maintaining a high signal‑to‑noise ratio preserves the credibility of the comments section and supports sustainable subscriber growth.
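As a minimal sketch of the flagging logic described above: assuming comment data has already been exported as `(commenter_id, post_id)` pairs (the function name, the pair format, and the `min_posts` threshold are illustrative assumptions, not Substack API details), a few lines of Python can surface accounts whose comment‑to‑post ratio is perfectly uniform.

```python
from collections import defaultdict

def flag_uniform_commenters(comments, min_posts=10):
    """Flag accounts that leave exactly one comment on every post they touch.

    `comments` is an iterable of (commenter_id, post_id) pairs, e.g. from
    an exported comment scrape. Genuine readers comment in irregular
    bursts; a perfectly uniform 1:1 comment-to-post ratio sustained
    across many posts is the automation fingerprint discussed above.
    """
    # Count comments per (account, post).
    per_account = defaultdict(lambda: defaultdict(int))
    for commenter, post in comments:
        per_account[commenter][post] += 1

    flagged = []
    for commenter, posts in per_account.items():
        # Enough posts to be meaningful, and exactly one comment on each.
        if len(posts) >= min_posts and all(n == 1 for n in posts.values()):
            flagged.append(commenter)
    return flagged

# Example: a suspected bot comments exactly once on 12 posts, while a
# human reader clusters several comments on a couple of posts.
data = [("bot_acct", f"post{i}") for i in range(12)]
data += [("reader", "post0"), ("reader", "post0"), ("reader", "post3")]
print(flag_uniform_commenters(data))  # → ['bot_acct']
```

Real accounts can of course trip this heuristic, so flagged names are candidates for a follow‑up check (e.g. a live Turing question or canary trap), not an automatic verdict.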
