
Hannes Bajohr - Making Worlds in Novels and LLMs
The talk by Hannes Bajohr explores how large language models (LLMs) and novels both construct "worlds" through sequential text generation. He begins by citing recent research that models navigation in Manhattan as a deterministic finite automaton, showing that LLMs can learn local structure yet produce globally incoherent maps. Extending this analogy to narrative, he asks whether LLM-generated stories encode a comparable internal world model, and argues that answering this question requires interdisciplinary analysis combining computer science, literary theory, and philosophy.

As an empirical case, Bajohr describes his own practice of fine-tuning an open-source model on German novels to co-author "Berlin, Miami," a work that is stylistically intriguing but persistently disjointed. The example illustrates that LLMs can produce texts that hang together enough to be read as novels even while their underlying world representation remains fragmented.

Drawing on Hans Blumenberg's philosophy, Bajohr defines the novel as a distinctly modern, relational model of reality, one that emphasizes structure over content. By juxtaposing this definition with AI's statistical language generation, he shows how AI forces scholars to make explicit the assumptions about meaning, style, and coherence embedded in both human and machine narratives. The resulting dialogue points toward a nascent "artificial humanities" in which literary concepts refine AI models and vice versa. The implications are twofold: creators must navigate new forms of co-authorship and attribution, while scholars gain a fresh lens for critiquing and improving LLMs' narrative capacities. Understanding the limits of LLM world-building thus informs both the development of more coherent generative systems and the cultural reception of AI-augmented literature.

Genevieve Smith - What Gets Encoded: AI, Inequity, and Alternative Technological Futures
Genevieve Smith, founder of the Responsible AI Initiative at Berkeley’s AI Lab, delivered a talk titled “What Gets Encoded: AI, Inequity, and Alternative Technological Futures.” She argued that AI systems are not neutral; they embed existing social hierarchies and can...