
AI alignment research has expanded from roughly 100 full-time experts at GPT-1's debut to six times that number by 2025, yet it remains a tiny slice of overall AI investment. Frontier labs such as OpenAI, Anthropic and DeepMind now acknowledge that future superhuman models will need to automate their own safety work, prompting initiatives to build "human-level automated alignment researchers." While early automation (code generation, model auditing, and red-teaming) shows promise, current systems still lack trustworthy self-assessment, often overstating their own safety with unwarranted confidence. The field also lacks robust benchmarks to certify that an AI can safely conduct alignment research without human oversight.

Nvidia CEO Jensen Huang has cultivated a direct line to President Donald Trump, turning personal access into a powerful lobbying tool. His influence helped reverse export restrictions, secure a $20 billion Groq acquisition tied to a Trump‑aligned investment firm, and win...

A wave of chatbot safety legislation has emerged in six states—Colorado, Hawaii, Arizona, Georgia, Nebraska and Idaho—mirroring Oregon's recently passed SB 1546. Each bill includes a carve‑out that exempts major AI services embedded in larger platforms, limits private lawsuits by...

The AI‑focused super PAC Leading the Future raised over $50 million and secured decisive victories for pro‑AI candidates in Texas and North Carolina, spending more than $1.2 million on two Republican winners. In contrast, the Public First Action network, funded primarily by...