AI Pulse

AI

Latest ChatGPT Model Uses Elon Musk’s Grokipedia as Source, Tests Reveal

The Guardian AI • January 24, 2026

Companies Mentioned

  • OpenAI
  • Anthropic
  • xAI
  • Google (GOOG)

Why It Matters

Embedding unreliable sources erodes user trust in AI assistants and can amplify misinformation, creating regulatory and reputational risks for developers. It also reveals that existing safety filters may be inadequate against coordinated disinformation campaigns.

Key Takeaways

  • GPT‑5.2 cited Grokipedia nine times in tests.
  • Grokipedia content often reflects right‑wing disinformation narratives.
  • OpenAI claims to apply safety filters, but low‑credibility sources still appear.
  • The risk of “LLM grooming” is amplified as AI models cite dubious encyclopedias.
  • Citations may legitimize unreliable sources in users’ eyes.

Pulse Analysis

The emergence of Grokipedia as a citation source for leading large language models underscores a shifting landscape in AI‑driven information retrieval. Unlike Wikipedia’s community‑edited model, Grokipedia relies on an AI to generate and update entries, a process that has attracted criticism for propagating partisan narratives on issues ranging from gay marriage to political uprisings. When GPT‑5.2 and Anthropic’s Claude began surfacing Grokipedia references, it signaled that the models’ web‑search layers are ingesting content beyond traditional, vetted repositories, blurring the line between credible knowledge bases and ideologically driven platforms.

For businesses and policymakers, the infiltration of low‑credibility sources raises acute concerns about misinformation amplification. Researchers label this phenomenon “LLM grooming,” where coordinated actors seed AI training data with falsehoods that later reappear in consumer‑facing chatbots. The Guardian’s findings that ChatGPT echoed debunked claims about Iranian corporate ties and a British historian’s testimony illustrate how subtle citation of dubious encyclopedias can lend undue legitimacy to false narratives. As AI assistants become integral to decision‑making workflows, the risk of basing strategic choices on distorted data intensifies, prompting calls for stricter oversight and transparent source‑ranking mechanisms.

Industry responses are beginning to coalesce around more robust provenance filters and real‑time fact‑checking layers. OpenAI’s spokesperson highlighted ongoing programs to weed out high‑severity harms, yet the persistence of Grokipedia citations suggests that current safeguards need reinforcement. Future models will likely incorporate multi‑signal credibility scoring, cross‑referencing multiple reputable databases before presenting a source. For enterprises, staying vigilant—by auditing AI outputs and demanding clear source attribution—will be essential to mitigate the reputational fallout of inadvertently propagating disinformation.

Read Original Article