AI News and Headlines

AI Pulse

AI

Former OpenAI Policy Chief Launches Institute for Independent AI Safety Audits

THE DECODER • January 19, 2026

Companies Mentioned

OpenAI
Anthropic
Google (GOOG)
Y Combinator

Why It Matters

Independent audits could establish enforceable safety standards, reducing reliance on self‑regulation and limiting systemic AI risks. This shift may reshape liability, insurance underwriting, and regulatory approaches for AI providers.

Key Takeaways

  • AVERI seeks independent audits for frontier AI models
  • Raised $7.5M, targeting $13M for 14 staff
  • Proposes AI Assurance Levels from limited to treaty‑grade
  • Insurers may mandate audits for AI‑dependent businesses
  • Industry insiders fund AVERI, indicating internal safety concerns

Pulse Analysis

The rapid rollout of large‑scale generative models has outpaced traditional safety oversight, leaving regulators and customers to trust manufacturers’ self‑assessments. Miles Brundage’s departure from OpenAI underscores a growing recognition that internal review processes lack the transparency and rigor needed for high‑impact AI systems. By establishing AVERI, Brundage is institutionalizing a third‑party verification model that mirrors audit practices in finance and pharmaceuticals, offering a credible counterweight to industry‑driven standards.

AVERI’s flagship contribution is a tiered AI Assurance framework that categorizes audits from Level 1, reflecting current limited testing, to Level 4, which delivers treaty‑grade assurance suitable for cross‑border governance. The framework, detailed in a paper co‑authored by over 30 AI safety experts, defines clear metrics, data‑access protocols, and reporting requirements. Funding of $7.5 million—spearheaded by former Y Combinator president Geoff Ralston and contributions from AI‑lab employees—signals both confidence in the model and an acknowledgment of internal safety concerns. The institute’s staffing plan positions it to conduct rigorous evaluations across multiple leading models.

Market forces are likely to accelerate adoption before any formal regulation arrives. Large enterprises integrating AI into mission‑critical workflows will demand audit certifications to mitigate operational risk, while insurers, eager to underwrite AI‑related policies, may condition coverage on verified safety assessments. Such commercial pressure could create de facto standards, compelling AI firms to submit to independent scrutiny. In this environment, AVERI’s audits could become a prerequisite for market entry, reshaping the competitive landscape and setting a new benchmark for responsible AI deployment.


Read Original Article