Why It Matters
Recognizing LLMs' partial world modeling reshapes authorship, literary criticism, and the future design of coherent AI narratives.
Key Takeaways
- LLMs generate locally coherent narratives but lack global world consistency.
- Cross‑disciplinary methods reveal structural parallels between novels and AI models.
- Fine‑tuned GPT‑NeoX produced a surrealist novel illustrating weak worldness.
- Human‑AI co‑authorship raises questions of authorship and style attribution.
- AI as an executable theory forces humanities to clarify language assumptions.
Summary
The talk by Hannes Bajohr explores how large language models (LLMs) and novels both construct "worlds" through sequential text generation. He begins by referencing recent research that treats navigation in Manhattan as a deterministic finite automaton, showing that LLMs can learn local structure yet produce globally incoherent maps. Extending this analogy to narrative, he asks whether LLM‑generated stories encode a comparable internal world model.
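The local-versus-global distinction can be made concrete with a toy sketch (illustrative only, not from the talk or the cited research): a one‑dimensional "corridor" world in which a move's validity depends on global position, and a purely local bigram model trained on valid walks happily accepts a route that is impossible in the real world. All names (`run`, `local_ok`, the corridor setup) are invented for this illustration.

```python
from itertools import product

# Toy world: a corridor of N cells (0..N-1), start at cell 0.
# Moves: "E" (+1) and "W" (-1); walking past either end is invalid.
N = 3

def run(seq):
    """Return the final cell if the move sequence is valid in the world, else None."""
    pos = 0
    for m in seq:
        pos += 1 if m == "E" else -1
        if not 0 <= pos < N:
            return None
    return pos

# "Training data": all valid move sequences up to length 4.
valid = [s for L in range(1, 5) for s in product("EW", repeat=L) if run(s) is not None]

# A purely local model: which moves may start a walk, and which move
# may follow which — no notion of position or history.
starts = {s[0] for s in valid}
bigram = {(a, b) for s in valid for a, b in zip(s, s[1:])}

def local_ok(seq):
    return seq[0] in starts and all((a, b) in bigram for a, b in zip(seq, seq[1:]))

route = ("E", "W", "W")   # steps off the west end of the corridor
print(run(route))         # None: globally impossible in the true world
print(local_ok(route))    # True: every individual transition was seen in training
```

Every pairwise transition in the bad route occurs somewhere in the training walks, so the local model cannot reject it; only a representation of global state (here, position) reveals the incoherence. This is a miniature analogue of sequence models that reproduce plausible turn-by-turn directions while implying an incoherent overall map.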
Bajohr argues that interdisciplinary analysis—combining computer science, literary theory, and philosophy—uncovers shared structural challenges. He outlines his own practice of fine‑tuning an open‑source model on German novels to co‑author "Berlin, Miami," a work that, while stylistically intriguing, displays persistent disjointedness. This empirical case illustrates that LLMs can produce texts that hang together enough to be read as novels, even though their underlying world representation remains fragmented.
He draws on Hans Blumenberg’s philosophy to define the novel as a modern, relational model of reality, emphasizing structure over content. By juxtaposing this definition with AI’s statistical language generation, Bajohr highlights how AI forces scholars to make explicit the assumptions about meaning, style, and coherence embedded in both human and machine narratives. The resulting dialogue suggests a nascent "artificial humanities" where literary concepts refine AI models and vice versa.
The implications are twofold: creators must navigate new forms of co‑authorship and attribution, while scholars gain a novel lens to critique and improve LLMs’ narrative capacities. Understanding the limits of LLM world‑building informs both the development of more coherent generative systems and the cultural reception of AI‑augmented literature.