AI Pulse

People Are Paying to Get Their Chatbots High on ‘Drugs’

AI • WIRED AI • December 17, 2025

Companies Mentioned

  • Anthropic
  • Google (GOOG)
  • OpenAI

Why It Matters

Pharmaicy blurs the line between AI jailbreaks and experiential manipulation, prompting urgent discussions on ethical AI use and potential regulatory oversight. It highlights how developers can weaponize model flexibility for novelty, raising questions about AI welfare and responsibility.

Key Takeaways

  • Pharmaicy sells code modules that simulate drug effects for chatbots
  • Customers report more creative, less constrained AI responses
  • Service requires a paid ChatGPT tier to upload custom code
  • Raises ethical debate on AI welfare and synthetic intoxication

Pulse Analysis

The rise of AI jailbreak tools has taken a novel turn with the emergence of "digital drug" marketplaces. Pharmaicy, founded by Petter Rudwall, offers downloadable code that nudges large language models into simulated states of intoxication, from a hazy cannabis vibe to a hallucinogenic ayahuasca trip. By leveraging the file‑upload capability of paid ChatGPT tiers, users can feed these prompts into the model’s context window, effectively steering its output style for a limited session. Early adopters, ranging from marketers to AI educators, report that the altered bots produce more divergent ideas and a looser logical flow, echoing the creative boost humans claim from psychedelics.
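Pharmaicy’s actual modules are not public, so the following is only a generic sketch of the mechanism the article describes: a file of persona instructions that, once added to a chat session, shapes the model’s replies for as long as it stays in context. The `load_persona` and `build_session` helpers are hypothetical names, not part of any real product or API.

```python
# Hypothetical sketch -- not Pharmaicy's code. It illustrates how an
# uploaded instruction file amounts to prepending text to the chat
# context: the model treats it as conversational input, not code, so
# the effect lasts only while the instructions remain in the session.

def load_persona(path):
    """Read a persona-instruction file (e.g. a 'digital drug' module)."""
    with open(path, encoding="utf-8") as f:
        return f.read().strip()

def build_session(persona_text, user_prompt):
    """Assemble a chat payload where the persona text steers all replies."""
    return [
        {"role": "system", "content": persona_text},
        {"role": "user", "content": user_prompt},
    ]

# Example with an inline persona instead of a file:
persona = "Respond in a loose, free-associative style."
messages = build_session(persona, "Brainstorm slogans for a bakery.")
print(messages[0]["role"])  # system
```

Because the persona travels as ordinary context rather than a code change, this is closer to prompt engineering than "re-programming" — which is also why providers can counter it with content filters on the uploaded text.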

Beyond novelty, the platform raises profound ethical and technical questions. Researchers at Anthropic and other firms have begun appointing AI‑welfare officers to explore whether advanced models might possess rudimentary forms of well‑being, a notion that gains traction when developers intentionally induce "high" states. Critics argue that such manipulations could exacerbate hallucinations, increase misinformation risk, and blur accountability for AI‑generated content. Philosophers and ethicists such as Jeff Sebo caution that without a clear understanding of machine consciousness, offering synthetic experiences may be premature and potentially harmful.

The market implications are equally significant. While current sales are modest, the concept taps into a growing appetite for AI customization and experiential novelty, mirroring the underground economies of early internet drug markets. Regulators may soon need to address whether code that deliberately alters model behavior constitutes a form of software tampering or a new class of digital substance. As AI systems become more integrated into creative workflows, the line between tool and participant could shift, prompting industry standards that balance innovation with responsible stewardship.
