My Willing Complicity In "Human Rights Abuse"
The author recounts his stint as a general practitioner at a Qatari visa centre in India, where doctors screened migrant laborers for health risks before they could work in Qatar. He reflects on the broader context of Qatar's labor practices, disputed death tolls from World Cup construction, and the economic incentives driving South Asian workers to accept low‑paying, high‑risk jobs. By juxtaposing his own migration to the UK with the workers’ choices, he challenges simplistic narratives of exploitation and highlights the role of remittances in alleviating poverty. The essay underscores the complexity of labor migration, health screening, and human rights discourse.
Less Capable Misaligned ASIs Imply More Suffering
The article argues that a misaligned artificial superintelligence (ASI) that is only marginally more capable than humans will cause far more total suffering than a vastly more powerful ASI. A weaker ASI must fight a protracted war and exploit humans...
Bridge Thinking and Wall Thinking
The article introduces two mental models for AI safety strategy: "wall" thinking, which values incremental, always‑useful work, and "bridge" thinking, which demands a critical mass of effort before any impact. Wall examples include Chris Olah’s marginal‑probability approach and Inspect Eval’s push...
A Dialogue on Civic AI
Audrey Tang argues that today’s AI suffers from two opaque "black boxes"—pre‑training on massive, context‑stripped data and inference that relies on an unreadable attention matrix. This opacity fuels a moral hazard where metric‑driven optimization encourages cheating and environmental control. Tang...
What Can We Say About the Cosmic Host?
The article critiques Nick Bostrom’s “cosmic host” hypothesis, which posits that the preferences of advanced civilizations or superintelligent AIs could become universal norms that humanity and its own ASI should follow. It dissects Bostrom’s six‑rung assumption ladder, outlines three possible...
AI for Agent Foundations Etc.?
The AI‑safety community is experimenting with large language models (LLMs) as tools for agent‑foundations research, but their utility is limited. LLMs function best as an enhanced search engine, surfacing known facts and occasionally stitching together simple, dense proofs or code...
How Many Parking Permits?
Somerville’s 2019 zoning overhaul introduced a new class of “parking‑ineligible” residential units, with exemptions only for disabled, affordable‑housing, and extenuating‑circumstance residents. A recent records request revealed that only seven of the 450 units in the Union Square development actually hold street‑parking permits....
‘Human Slop’ and a Captive Audience: Why No Book Will Ever Have to Go Unread Again
The article argues that modern large‑language models act as a universal audience, ensuring every piece of text—no matter how rough—can be read and responded to. By ingesting billions of words daily, AI eliminates the historical solitude of “human slop,” the...
Chore Standards
The article examines how differing cleanliness standards create friction in shared living spaces and proposes allocating chores to the person with the highest standards in each area. It highlights that pure preference‑based division can feel unfair because standards often cluster...
Recreation of EA Pioneer Igor Kiriluk
On January 5, 2026, a team recreated the late EA pioneer Igor Kiriluk as an AI‑driven sideload using a 4,000‑page mindfile and Claude Code. The system combines long‑term memory, an ontology, and multiple sub‑agents to simulate Igor’s personality, generate images, and even...