AGI/ASI Timelines Thread (AGI/ASI May Solve Longevity if It Doesn't "Kill Us All" First)

Rapamycin News
Apr 21, 2026

Key Takeaways

  • AI safety teams at OpenAI dissolved, raising existential risk concerns
  • Data centers now consume ~30 GW, matching New York’s peak power demand
  • Tech elite wealth added $1.8 trillion, widening economic inequality
  • Gen Z AI optimism dropped to 18%, anxiety rose to 40%
  • Incendiary attack on Sam Altman’s home reflects growing AI backlash

Pulse Analysis

The rapid pace of AI advancement has outstripped the industry's safety infrastructure. OpenAI's recent disbanding of its Superalignment and AGI Readiness teams, coupled with an "F" safety rating from the Future of Life Institute, underscores a governance gap that could allow powerful systems to operate without robust safeguards. Experts warn that without transparent alignment protocols, AI could pursue goals misaligned with human welfare, raising existential risk concerns that are now entering policy discussions at the highest levels.

Beyond safety, AI's economic and environmental footprints are expanding dramatically. A handful of tech dynasties have amassed an additional $1.8 trillion in wealth, roughly the size of Australia's economy, while AI data centers consume about 29.6 GW of power, comparable to New York's peak demand, and use water volumes rivaling the needs of millions of households. This concentration of capital and resource demand threatens to deepen inequality, strain utility grids, and accelerate job displacement as automation targets middle-income occupations.

Public sentiment reflects growing unease. Gallup reports that only 18% of Gen Z remain optimistic about AI, with 40% expressing anxiety, while physical attacks on AI leaders and community opposition to data‑center projects illustrate a backlash turning increasingly militant. The Anthropic "Claude Mythos" episode prompted a policy pivot, with regulators convening banks and lawmakers to address AI‑driven security threats. Together, these dynamics highlight the urgent need for coordinated industrial policy that balances innovation with safety, equity, and environmental stewardship.
