AGI/ASI Timelines Thread (AGI/ASI May Solve Longevity if It Doesn't "Kill Us All" First)

Rapamycin News
Apr 21, 2026

Key Takeaways

  • 0.01% of Twitter users spread 80% of 2016 election misinformation.
  • Algorithms create “bespoke realities” that inflate fringe opinions into perceived norms.
  • Anti‑vaccine misinformation drives health‑risk behaviors despite scientific consensus.
  • Generative AI may amplify propaganda or improve information accuracy.
  • Information hygiene—source audits and algorithmic friction—reduces digital stress.

Pulse Analysis

The NYU‑Norwegian School of Economics paper, published in the elite Administrative Science Quarterly, quantifies the disproportionate power of a microscopic elite on social platforms. By tracing tweet activity from the 2016 U.S. election, the researchers show that roughly one in ten thousand users generated the majority of false narratives, leveraging algorithmic feedback loops to dominate feeds. This “tyranny of the minority” creates bespoke realities—personalized information silos where fringe ideas appear mainstream, eroding the shared factual base essential for coherent public discourse.

From a health-span perspective, the distortion has tangible physiological costs. Anti-vaccine misinformation, amplified by these hyper-active accounts, fuels vaccine hesitancy, leading to preventable disease outbreaks and chronic stress among exposed populations. The constant barrage of emotionally charged content triggers cortisol spikes, which research links to accelerated aging and reduced longevity. Practitioners and longevity enthusiasts are urged to audit their medical sources, prioritize peer-reviewed data, and introduce digital friction, such as a deliberate delay before sharing, to mitigate stress and protect health decisions from algorithmic bias.

Looking ahead, generative AI stands at a crossroads. Its capacity to generate persuasive narratives could either reinforce the existing tyranny by giving influencers more realistic propaganda tools, or it could serve as a corrective filter that nudges users toward verified information. Policymakers and platform designers must therefore embed transparency mechanisms, promote source verification, and support AI‑driven fact‑checking. By combining rigorous information hygiene with responsible AI deployment, society can reclaim a more balanced digital public square and safeguard both democratic processes and long‑term health outcomes.
